00:00:00.002 Started by upstream project "autotest-per-patch" build number 132385 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.075 using credential 00000000-0000-0000-0000-000000000002 00:00:00.077 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.120 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.229 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.267 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.267 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.062 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.075 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.086 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.086 > git config core.sparsecheckout # timeout=10 00:00:08.097 > git read-tree -mu HEAD # timeout=10 00:00:08.112 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.132 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.132 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.218 [Pipeline] Start of Pipeline 00:00:08.231 [Pipeline] library 00:00:08.232 Loading library shm_lib@master 00:00:08.233 Library shm_lib@master is cached. Copying from home. 00:00:08.250 [Pipeline] node 00:00:08.268 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:08.270 [Pipeline] { 00:00:08.279 [Pipeline] catchError 00:00:08.280 [Pipeline] { 00:00:08.290 [Pipeline] wrap 00:00:08.296 [Pipeline] { 00:00:08.304 [Pipeline] stage 00:00:08.306 [Pipeline] { (Prologue) 00:00:08.323 [Pipeline] echo 00:00:08.324 Node: VM-host-SM17 00:00:08.330 [Pipeline] cleanWs 00:00:08.340 [WS-CLEANUP] Deleting project workspace... 00:00:08.340 [WS-CLEANUP] Deferred wipeout is used... 00:00:08.346 [WS-CLEANUP] done 00:00:08.535 [Pipeline] setCustomBuildProperty 00:00:08.603 [Pipeline] httpRequest 00:00:09.664 [Pipeline] echo 00:00:09.666 Sorcerer 10.211.164.20 is alive 00:00:09.677 [Pipeline] retry 00:00:09.680 [Pipeline] { 00:00:09.694 [Pipeline] httpRequest 00:00:09.699 HttpMethod: GET 00:00:09.700 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.700 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.702 Response Code: HTTP/1.1 200 OK 00:00:09.703 Success: Status code 200 is in the accepted range: 200,404 00:00:09.703 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.855 [Pipeline] } 00:00:10.872 [Pipeline] // retry 00:00:10.880 [Pipeline] sh 00:00:11.161 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.177 [Pipeline] httpRequest 00:00:11.575 [Pipeline] echo 00:00:11.577 Sorcerer 10.211.164.20 is alive 00:00:11.583 [Pipeline] retry 00:00:11.585 [Pipeline] { 00:00:11.595 [Pipeline] httpRequest 00:00:11.599 HttpMethod: GET 00:00:11.599 URL: 
http://10.211.164.20/packages/spdk_c0b2ac5c9de998fd08ca2f8d2ce4ba7a1b1d7563.tar.gz 00:00:11.600 Sending request to url: http://10.211.164.20/packages/spdk_c0b2ac5c9de998fd08ca2f8d2ce4ba7a1b1d7563.tar.gz 00:00:11.621 Response Code: HTTP/1.1 200 OK 00:00:11.621 Success: Status code 200 is in the accepted range: 200,404 00:00:11.622 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_c0b2ac5c9de998fd08ca2f8d2ce4ba7a1b1d7563.tar.gz 00:04:03.253 [Pipeline] } 00:04:03.270 [Pipeline] // retry 00:04:03.277 [Pipeline] sh 00:04:03.557 + tar --no-same-owner -xf spdk_c0b2ac5c9de998fd08ca2f8d2ce4ba7a1b1d7563.tar.gz 00:04:06.923 [Pipeline] sh 00:04:07.205 + git -C spdk log --oneline -n5 00:04:07.205 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:04:07.205 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:04:07.205 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:04:07.205 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:04:07.205 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy() 00:04:07.224 [Pipeline] writeFile 00:04:07.238 [Pipeline] sh 00:04:07.534 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:07.547 [Pipeline] sh 00:04:07.827 + cat autorun-spdk.conf 00:04:07.827 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:07.827 SPDK_RUN_ASAN=1 00:04:07.827 SPDK_RUN_UBSAN=1 00:04:07.827 SPDK_TEST_RAID=1 00:04:07.827 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:07.835 RUN_NIGHTLY=0 00:04:07.837 [Pipeline] } 00:04:07.851 [Pipeline] // stage 00:04:07.864 [Pipeline] stage 00:04:07.866 [Pipeline] { (Run VM) 00:04:07.878 [Pipeline] sh 00:04:08.159 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:08.159 + echo 'Start stage prepare_nvme.sh' 00:04:08.159 Start stage prepare_nvme.sh 00:04:08.159 + [[ -n 7 ]] 00:04:08.159 + disk_prefix=ex7 00:04:08.159 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest ]] 00:04:08.159 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:04:08.159 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:04:08.159 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:08.159 ++ SPDK_RUN_ASAN=1 00:04:08.159 ++ SPDK_RUN_UBSAN=1 00:04:08.159 ++ SPDK_TEST_RAID=1 00:04:08.159 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:08.159 ++ RUN_NIGHTLY=0 00:04:08.159 + cd /var/jenkins/workspace/raid-vg-autotest 00:04:08.159 + nvme_files=() 00:04:08.159 + declare -A nvme_files 00:04:08.159 + backend_dir=/var/lib/libvirt/images/backends 00:04:08.159 + nvme_files['nvme.img']=5G 00:04:08.159 + nvme_files['nvme-cmb.img']=5G 00:04:08.159 + nvme_files['nvme-multi0.img']=4G 00:04:08.159 + nvme_files['nvme-multi1.img']=4G 00:04:08.159 + nvme_files['nvme-multi2.img']=4G 00:04:08.159 + nvme_files['nvme-openstack.img']=8G 00:04:08.159 + nvme_files['nvme-zns.img']=5G 00:04:08.159 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:08.159 + (( SPDK_TEST_FTL == 1 )) 00:04:08.159 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:08.159 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:04:08.159 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:08.159 + for nvme in "${!nvme_files[@]}" 00:04:08.159 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:04:08.418 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:08.418 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:04:08.418 + echo 'End stage prepare_nvme.sh' 00:04:08.418 End stage prepare_nvme.sh 00:04:08.429 [Pipeline] sh 00:04:08.710 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:08.710 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:04:08.710 00:04:08.710 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:04:08.710 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:04:08.710 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:04:08.710 HELP=0 00:04:08.710 DRY_RUN=0 00:04:08.711 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:04:08.711 NVME_DISKS_TYPE=nvme,nvme, 00:04:08.711 NVME_AUTO_CREATE=0 00:04:08.711 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:04:08.711 NVME_CMB=,, 00:04:08.711 NVME_PMR=,, 00:04:08.711 NVME_ZNS=,, 00:04:08.711 NVME_MS=,, 00:04:08.711 NVME_FDP=,, 00:04:08.711 SPDK_VAGRANT_DISTRO=fedora39 00:04:08.711 SPDK_VAGRANT_VMCPU=10 00:04:08.711 SPDK_VAGRANT_VMRAM=12288 00:04:08.711 SPDK_VAGRANT_PROVIDER=libvirt 00:04:08.711 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:08.711 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:08.711 SPDK_OPENSTACK_NETWORK=0 00:04:08.711 VAGRANT_PACKAGE_BOX=0 00:04:08.711 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:08.711 FORCE_DISTRO=true 00:04:08.711 VAGRANT_BOX_VERSION= 00:04:08.711 EXTRA_VAGRANTFILES= 00:04:08.711 NIC_MODEL=e1000 00:04:08.711 00:04:08.711 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:04:08.711 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:04:11.999 Bringing machine 'default' up with 'libvirt' provider... 00:04:12.257 ==> default: Creating image (snapshot of base box volume). 00:04:12.257 ==> default: Creating domain with the following settings... 00:04:12.257 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732101379_d63d0699d45739b7ddc5 00:04:12.257 ==> default: -- Domain type: kvm 00:04:12.257 ==> default: -- Cpus: 10 00:04:12.257 ==> default: -- Feature: acpi 00:04:12.257 ==> default: -- Feature: apic 00:04:12.257 ==> default: -- Feature: pae 00:04:12.257 ==> default: -- Memory: 12288M 00:04:12.257 ==> default: -- Memory Backing: hugepages: 00:04:12.257 ==> default: -- Management MAC: 00:04:12.257 ==> default: -- Loader: 00:04:12.257 ==> default: -- Nvram: 00:04:12.257 ==> default: -- Base box: spdk/fedora39 00:04:12.257 ==> default: -- Storage pool: default 00:04:12.257 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732101379_d63d0699d45739b7ddc5.img (20G) 00:04:12.257 ==> default: -- Volume Cache: default 00:04:12.257 ==> default: -- Kernel: 00:04:12.257 ==> default: -- Initrd: 00:04:12.257 ==> default: -- Graphics Type: vnc 00:04:12.257 ==> default: -- Graphics Port: -1 00:04:12.257 ==> default: -- Graphics IP: 127.0.0.1 00:04:12.257 ==> default: -- Graphics Password: Not defined 00:04:12.257 ==> default: -- Video Type: cirrus 00:04:12.257 ==> default: -- Video VRAM: 9216 00:04:12.257 ==> default: -- Sound Type: 00:04:12.257 ==> default: -- Keymap: en-us 00:04:12.257 ==> default: -- TPM Path: 00:04:12.257 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:04:12.257 ==> default: -- Command line args: 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:12.257 ==> default: -> value=-drive, 00:04:12.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:12.257 ==> default: -> value=-drive, 00:04:12.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:12.257 ==> default: -> value=-drive, 00:04:12.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:12.257 ==> default: -> value=-drive, 00:04:12.257 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:12.257 ==> default: -> value=-device, 00:04:12.257 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:12.515 ==> default: Creating shared folders metadata... 00:04:12.515 ==> default: Starting domain. 00:04:13.891 ==> default: Waiting for domain to get an IP address... 00:04:31.994 ==> default: Waiting for SSH to become available... 
00:04:31.994 ==> default: Configuring and enabling network interfaces... 00:04:34.526 default: SSH address: 192.168.121.175:22 00:04:34.526 default: SSH username: vagrant 00:04:34.526 default: SSH auth method: private key 00:04:36.431 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:44.551 ==> default: Mounting SSHFS shared folder... 00:04:45.926 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:45.926 ==> default: Checking Mount.. 00:04:46.910 ==> default: Folder Successfully Mounted! 00:04:46.910 ==> default: Running provisioner: file... 00:04:47.844 default: ~/.gitconfig => .gitconfig 00:04:48.102 00:04:48.102 SUCCESS! 00:04:48.102 00:04:48.102 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:48.102 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:48.102 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:04:48.102 00:04:48.110 [Pipeline] } 00:04:48.126 [Pipeline] // stage 00:04:48.135 [Pipeline] dir 00:04:48.135 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:04:48.137 [Pipeline] { 00:04:48.149 [Pipeline] catchError 00:04:48.150 [Pipeline] { 00:04:48.161 [Pipeline] sh 00:04:48.438 + vagrant ssh-config --host vagrant 00:04:48.438 + sed -ne /^Host/,$p 00:04:48.438 + tee ssh_conf 00:04:51.720 Host vagrant 00:04:51.720 HostName 192.168.121.175 00:04:51.720 User vagrant 00:04:51.720 Port 22 00:04:51.720 UserKnownHostsFile /dev/null 00:04:51.720 StrictHostKeyChecking no 00:04:51.720 PasswordAuthentication no 00:04:51.720 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:51.720 IdentitiesOnly yes 00:04:51.720 LogLevel FATAL 00:04:51.720 ForwardAgent yes 00:04:51.720 ForwardX11 yes 00:04:51.720 00:04:51.734 [Pipeline] withEnv 00:04:51.736 [Pipeline] { 00:04:51.749 [Pipeline] sh 00:04:52.028 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:52.028 source /etc/os-release 00:04:52.028 [[ -e /image.version ]] && img=$(< /image.version) 00:04:52.028 # Minimal, systemd-like check. 00:04:52.028 if [[ -e /.dockerenv ]]; then 00:04:52.028 # Clear garbage from the node's name: 00:04:52.028 # agt-er_autotest_547-896 -> autotest_547-896 00:04:52.028 # $HOSTNAME is the actual container id 00:04:52.028 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:52.028 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:52.028 # We can assume this is a mount from a host where container is running, 00:04:52.028 # so fetch its hostname to easily identify the target swarm worker. 
00:04:52.028 container="$(< /etc/hostname) ($agent)" 00:04:52.028 else 00:04:52.028 # Fallback 00:04:52.028 container=$agent 00:04:52.028 fi 00:04:52.028 fi 00:04:52.028 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:52.028 00:04:52.040 [Pipeline] } 00:04:52.056 [Pipeline] // withEnv 00:04:52.065 [Pipeline] setCustomBuildProperty 00:04:52.081 [Pipeline] stage 00:04:52.084 [Pipeline] { (Tests) 00:04:52.102 [Pipeline] sh 00:04:52.380 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:52.656 [Pipeline] sh 00:04:52.934 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:53.204 [Pipeline] timeout 00:04:53.205 Timeout set to expire in 1 hr 30 min 00:04:53.206 [Pipeline] { 00:04:53.220 [Pipeline] sh 00:04:53.497 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:54.063 HEAD is now at c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:04:54.077 [Pipeline] sh 00:04:54.357 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:54.630 [Pipeline] sh 00:04:54.910 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:54.927 [Pipeline] sh 00:04:55.217 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:04:55.217 ++ readlink -f spdk_repo 00:04:55.217 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:55.217 + [[ -n /home/vagrant/spdk_repo ]] 00:04:55.217 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:55.217 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:55.217 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:55.217 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:55.217 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:55.217 + [[ raid-vg-autotest == pkgdep-* ]] 00:04:55.217 + cd /home/vagrant/spdk_repo 00:04:55.217 + source /etc/os-release 00:04:55.217 ++ NAME='Fedora Linux' 00:04:55.217 ++ VERSION='39 (Cloud Edition)' 00:04:55.217 ++ ID=fedora 00:04:55.217 ++ VERSION_ID=39 00:04:55.217 ++ VERSION_CODENAME= 00:04:55.217 ++ PLATFORM_ID=platform:f39 00:04:55.217 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:55.217 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:55.217 ++ LOGO=fedora-logo-icon 00:04:55.217 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:55.217 ++ HOME_URL=https://fedoraproject.org/ 00:04:55.217 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:55.217 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:55.217 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:55.217 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:55.217 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:55.217 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:55.217 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:55.217 ++ SUPPORT_END=2024-11-12 00:04:55.217 ++ VARIANT='Cloud Edition' 00:04:55.217 ++ VARIANT_ID=cloud 00:04:55.217 + uname -a 00:04:55.217 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:55.217 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.838 Hugepages 00:04:55.838 node hugesize free / total 00:04:55.838 node0 1048576kB 0 / 0 00:04:55.838 node0 2048kB 0 / 0 00:04:55.838 00:04:55.838 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.838 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.838 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:55.838 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:04:55.838 + rm -f /tmp/spdk-ld-path 00:04:55.838 + source autorun-spdk.conf 00:04:55.838 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:55.838 ++ SPDK_RUN_ASAN=1 00:04:55.838 ++ SPDK_RUN_UBSAN=1 00:04:55.838 ++ SPDK_TEST_RAID=1 00:04:55.838 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:55.838 ++ RUN_NIGHTLY=0 00:04:55.838 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:55.838 + [[ -n '' ]] 00:04:55.838 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:55.838 + for M in /var/spdk/build-*-manifest.txt 00:04:55.838 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:55.838 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:55.838 + for M in /var/spdk/build-*-manifest.txt 00:04:55.838 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:55.838 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:55.838 + for M in /var/spdk/build-*-manifest.txt 00:04:55.838 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:55.838 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:55.838 ++ uname 00:04:55.838 + [[ Linux == \L\i\n\u\x ]] 00:04:55.838 + sudo dmesg -T 00:04:56.098 + sudo dmesg --clear 00:04:56.098 + dmesg_pid=5207 00:04:56.098 + sudo dmesg -Tw 00:04:56.098 + [[ Fedora Linux == FreeBSD ]] 00:04:56.098 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:56.098 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:56.098 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:56.098 + [[ -x /usr/src/fio-static/fio ]] 00:04:56.098 + export FIO_BIN=/usr/src/fio-static/fio 00:04:56.098 + FIO_BIN=/usr/src/fio-static/fio 00:04:56.098 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:56.098 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:56.099 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:56.099 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:56.099 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:56.099 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:56.099 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:56.099 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:56.099 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:56.099 11:17:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:56.099 11:17:03 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:56.099 11:17:03 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:04:56.099 11:17:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:56.099 11:17:03 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:56.099 11:17:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:56.099 11:17:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.099 11:17:03 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:56.099 11:17:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:56.099 11:17:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.099 11:17:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.099 11:17:03 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.099 11:17:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.099 11:17:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.099 11:17:03 -- paths/export.sh@5 -- $ export PATH 00:04:56.099 11:17:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.099 11:17:03 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:56.099 11:17:03 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:56.099 11:17:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101423.XXXXXX 00:04:56.099 11:17:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101423.pkwOKI 00:04:56.099 11:17:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:56.099 11:17:03 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:56.099 11:17:03 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:56.099 11:17:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:56.099 11:17:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:56.099 11:17:03 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:56.099 11:17:03 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:56.099 11:17:03 -- common/autotest_common.sh@10 -- $ set +x 00:04:56.099 11:17:03 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:04:56.099 11:17:03 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:56.099 11:17:03 -- pm/common@17 -- $ local monitor 00:04:56.099 11:17:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.099 11:17:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.099 11:17:03 -- pm/common@25 -- $ sleep 1 00:04:56.099 11:17:03 -- pm/common@21 -- $ date +%s 00:04:56.099 11:17:03 -- pm/common@21 -- $ date +%s 00:04:56.099 
11:17:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101423 00:04:56.099 11:17:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101423 00:04:56.099 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101423_collect-vmstat.pm.log 00:04:56.099 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101423_collect-cpu-load.pm.log 00:04:57.476 11:17:04 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:57.476 11:17:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:57.476 11:17:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:57.476 11:17:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:57.476 11:17:04 -- spdk/autobuild.sh@16 -- $ date -u 00:04:57.476 Wed Nov 20 11:17:04 AM UTC 2024 00:04:57.476 11:17:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:57.476 v25.01-pre-218-gc0b2ac5c9 00:04:57.476 11:17:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:57.476 11:17:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:57.476 11:17:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:57.476 11:17:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:57.476 11:17:04 -- common/autotest_common.sh@10 -- $ set +x 00:04:57.476 ************************************ 00:04:57.476 START TEST asan 00:04:57.476 ************************************ 00:04:57.476 using asan 00:04:57.476 11:17:04 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:57.476 00:04:57.476 real 0m0.000s 00:04:57.476 user 0m0.000s 00:04:57.476 sys 0m0.000s 00:04:57.476 11:17:04 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:57.476 11:17:04 asan -- common/autotest_common.sh@10 -- $ set +x 
00:04:57.476 ************************************ 00:04:57.476 END TEST asan 00:04:57.476 ************************************ 00:04:57.476 11:17:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:57.476 11:17:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:57.476 11:17:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:57.476 11:17:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:57.476 11:17:04 -- common/autotest_common.sh@10 -- $ set +x 00:04:57.476 ************************************ 00:04:57.476 START TEST ubsan 00:04:57.476 ************************************ 00:04:57.476 using ubsan 00:04:57.476 11:17:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:57.476 00:04:57.476 real 0m0.000s 00:04:57.476 user 0m0.000s 00:04:57.476 sys 0m0.000s 00:04:57.476 11:17:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:57.476 11:17:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:57.476 ************************************ 00:04:57.476 END TEST ubsan 00:04:57.476 ************************************ 00:04:57.476 11:17:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:57.476 11:17:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:57.476 11:17:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:57.476 11:17:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:04:57.476 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.476 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.735 Using 'verbs' RDMA provider 00:05:13.546 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:25.750 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:25.750 Creating mk/config.mk...done. 00:05:25.750 Creating mk/cc.flags.mk...done. 00:05:25.750 Type 'make' to build. 00:05:25.750 11:17:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:25.750 11:17:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:25.750 11:17:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:25.750 11:17:32 -- common/autotest_common.sh@10 -- $ set +x 00:05:25.750 ************************************ 00:05:25.750 START TEST make 00:05:25.750 ************************************ 00:05:25.750 11:17:32 make -- common/autotest_common.sh@1129 -- $ make -j10 00:05:25.750 make[1]: Nothing to be done for 'all'. 
00:05:37.995 The Meson build system 00:05:37.995 Version: 1.5.0 00:05:37.995 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:37.995 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:37.995 Build type: native build 00:05:37.995 Program cat found: YES (/usr/bin/cat) 00:05:37.995 Project name: DPDK 00:05:37.995 Project version: 24.03.0 00:05:37.995 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:37.995 C linker for the host machine: cc ld.bfd 2.40-14 00:05:37.995 Host machine cpu family: x86_64 00:05:37.995 Host machine cpu: x86_64 00:05:37.995 Message: ## Building in Developer Mode ## 00:05:37.995 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:37.995 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:37.995 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:37.995 Program python3 found: YES (/usr/bin/python3) 00:05:37.995 Program cat found: YES (/usr/bin/cat) 00:05:37.995 Compiler for C supports arguments -march=native: YES 00:05:37.996 Checking for size of "void *" : 8 00:05:37.996 Checking for size of "void *" : 8 (cached) 00:05:37.996 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:37.996 Library m found: YES 00:05:37.996 Library numa found: YES 00:05:37.996 Has header "numaif.h" : YES 00:05:37.996 Library fdt found: NO 00:05:37.996 Library execinfo found: NO 00:05:37.996 Has header "execinfo.h" : YES 00:05:37.996 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:37.996 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:37.996 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:37.996 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:37.996 Run-time dependency openssl found: YES 3.1.1 00:05:37.996 Run-time dependency libpcap found: YES 1.10.4 00:05:37.996 Has header "pcap.h" with dependency 
libpcap: YES 00:05:37.996 Compiler for C supports arguments -Wcast-qual: YES 00:05:37.996 Compiler for C supports arguments -Wdeprecated: YES 00:05:37.996 Compiler for C supports arguments -Wformat: YES 00:05:37.996 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:37.996 Compiler for C supports arguments -Wformat-security: NO 00:05:37.996 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:37.996 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:37.996 Compiler for C supports arguments -Wnested-externs: YES 00:05:37.996 Compiler for C supports arguments -Wold-style-definition: YES 00:05:37.996 Compiler for C supports arguments -Wpointer-arith: YES 00:05:37.996 Compiler for C supports arguments -Wsign-compare: YES 00:05:37.996 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:37.996 Compiler for C supports arguments -Wundef: YES 00:05:37.996 Compiler for C supports arguments -Wwrite-strings: YES 00:05:37.996 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:37.996 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:37.996 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:37.996 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:37.996 Program objdump found: YES (/usr/bin/objdump) 00:05:37.996 Compiler for C supports arguments -mavx512f: YES 00:05:37.996 Checking if "AVX512 checking" compiles: YES 00:05:37.996 Fetching value of define "__SSE4_2__" : 1 00:05:37.996 Fetching value of define "__AES__" : 1 00:05:37.996 Fetching value of define "__AVX__" : 1 00:05:37.996 Fetching value of define "__AVX2__" : 1 00:05:37.996 Fetching value of define "__AVX512BW__" : (undefined) 00:05:37.996 Fetching value of define "__AVX512CD__" : (undefined) 00:05:37.996 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:37.996 Fetching value of define "__AVX512F__" : (undefined) 00:05:37.996 Fetching value of define "__AVX512VL__" : 
(undefined) 00:05:37.996 Fetching value of define "__PCLMUL__" : 1 00:05:37.996 Fetching value of define "__RDRND__" : 1 00:05:37.996 Fetching value of define "__RDSEED__" : 1 00:05:37.996 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:37.996 Fetching value of define "__znver1__" : (undefined) 00:05:37.996 Fetching value of define "__znver2__" : (undefined) 00:05:37.996 Fetching value of define "__znver3__" : (undefined) 00:05:37.996 Fetching value of define "__znver4__" : (undefined) 00:05:37.996 Library asan found: YES 00:05:37.996 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:37.996 Message: lib/log: Defining dependency "log" 00:05:37.996 Message: lib/kvargs: Defining dependency "kvargs" 00:05:37.996 Message: lib/telemetry: Defining dependency "telemetry" 00:05:37.996 Library rt found: YES 00:05:37.996 Checking for function "getentropy" : NO 00:05:37.996 Message: lib/eal: Defining dependency "eal" 00:05:37.996 Message: lib/ring: Defining dependency "ring" 00:05:37.996 Message: lib/rcu: Defining dependency "rcu" 00:05:37.996 Message: lib/mempool: Defining dependency "mempool" 00:05:37.996 Message: lib/mbuf: Defining dependency "mbuf" 00:05:37.996 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:37.996 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:37.996 Compiler for C supports arguments -mpclmul: YES 00:05:37.996 Compiler for C supports arguments -maes: YES 00:05:37.996 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:37.996 Compiler for C supports arguments -mavx512bw: YES 00:05:37.996 Compiler for C supports arguments -mavx512dq: YES 00:05:37.996 Compiler for C supports arguments -mavx512vl: YES 00:05:37.996 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:37.996 Compiler for C supports arguments -mavx2: YES 00:05:37.996 Compiler for C supports arguments -mavx: YES 00:05:37.996 Message: lib/net: Defining dependency "net" 00:05:37.996 Message: lib/meter: Defining 
dependency "meter" 00:05:37.996 Message: lib/ethdev: Defining dependency "ethdev" 00:05:37.996 Message: lib/pci: Defining dependency "pci" 00:05:37.996 Message: lib/cmdline: Defining dependency "cmdline" 00:05:37.996 Message: lib/hash: Defining dependency "hash" 00:05:37.996 Message: lib/timer: Defining dependency "timer" 00:05:37.996 Message: lib/compressdev: Defining dependency "compressdev" 00:05:37.996 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:37.996 Message: lib/dmadev: Defining dependency "dmadev" 00:05:37.996 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:37.996 Message: lib/power: Defining dependency "power" 00:05:37.996 Message: lib/reorder: Defining dependency "reorder" 00:05:37.996 Message: lib/security: Defining dependency "security" 00:05:37.996 Has header "linux/userfaultfd.h" : YES 00:05:37.996 Has header "linux/vduse.h" : YES 00:05:37.996 Message: lib/vhost: Defining dependency "vhost" 00:05:37.996 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:37.996 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:37.996 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:37.996 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:37.996 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:37.996 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:37.996 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:37.996 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:37.996 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:37.996 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:37.996 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:37.996 Configuring doxy-api-html.conf using configuration 00:05:37.996 Configuring doxy-api-man.conf using configuration 00:05:37.996 Program mandb found: YES 
(/usr/bin/mandb) 00:05:37.996 Program sphinx-build found: NO 00:05:37.996 Configuring rte_build_config.h using configuration 00:05:37.996 Message: 00:05:37.996 ================= 00:05:37.996 Applications Enabled 00:05:37.996 ================= 00:05:37.996 00:05:37.996 apps: 00:05:37.996 00:05:37.996 00:05:37.996 Message: 00:05:37.996 ================= 00:05:37.996 Libraries Enabled 00:05:37.996 ================= 00:05:37.996 00:05:37.996 libs: 00:05:37.996 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:37.996 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:37.996 cryptodev, dmadev, power, reorder, security, vhost, 00:05:37.996 00:05:37.996 Message: 00:05:37.996 =============== 00:05:37.996 Drivers Enabled 00:05:37.996 =============== 00:05:37.996 00:05:37.996 common: 00:05:37.996 00:05:37.996 bus: 00:05:37.996 pci, vdev, 00:05:37.996 mempool: 00:05:37.996 ring, 00:05:37.996 dma: 00:05:37.996 00:05:37.996 net: 00:05:37.996 00:05:37.996 crypto: 00:05:37.996 00:05:37.996 compress: 00:05:37.996 00:05:37.996 vdpa: 00:05:37.996 00:05:37.996 00:05:37.996 Message: 00:05:37.996 ================= 00:05:37.996 Content Skipped 00:05:37.996 ================= 00:05:37.996 00:05:37.996 apps: 00:05:37.996 dumpcap: explicitly disabled via build config 00:05:37.996 graph: explicitly disabled via build config 00:05:37.996 pdump: explicitly disabled via build config 00:05:37.996 proc-info: explicitly disabled via build config 00:05:37.996 test-acl: explicitly disabled via build config 00:05:37.996 test-bbdev: explicitly disabled via build config 00:05:37.996 test-cmdline: explicitly disabled via build config 00:05:37.996 test-compress-perf: explicitly disabled via build config 00:05:37.996 test-crypto-perf: explicitly disabled via build config 00:05:37.996 test-dma-perf: explicitly disabled via build config 00:05:37.996 test-eventdev: explicitly disabled via build config 00:05:37.996 test-fib: explicitly disabled via build config 00:05:37.996 
test-flow-perf: explicitly disabled via build config 00:05:37.996 test-gpudev: explicitly disabled via build config 00:05:37.996 test-mldev: explicitly disabled via build config 00:05:37.996 test-pipeline: explicitly disabled via build config 00:05:37.996 test-pmd: explicitly disabled via build config 00:05:37.996 test-regex: explicitly disabled via build config 00:05:37.996 test-sad: explicitly disabled via build config 00:05:37.996 test-security-perf: explicitly disabled via build config 00:05:37.996 00:05:37.996 libs: 00:05:37.996 argparse: explicitly disabled via build config 00:05:37.996 metrics: explicitly disabled via build config 00:05:37.996 acl: explicitly disabled via build config 00:05:37.996 bbdev: explicitly disabled via build config 00:05:37.996 bitratestats: explicitly disabled via build config 00:05:37.996 bpf: explicitly disabled via build config 00:05:37.996 cfgfile: explicitly disabled via build config 00:05:37.996 distributor: explicitly disabled via build config 00:05:37.996 efd: explicitly disabled via build config 00:05:37.996 eventdev: explicitly disabled via build config 00:05:37.996 dispatcher: explicitly disabled via build config 00:05:37.996 gpudev: explicitly disabled via build config 00:05:37.996 gro: explicitly disabled via build config 00:05:37.996 gso: explicitly disabled via build config 00:05:37.996 ip_frag: explicitly disabled via build config 00:05:37.996 jobstats: explicitly disabled via build config 00:05:37.996 latencystats: explicitly disabled via build config 00:05:37.996 lpm: explicitly disabled via build config 00:05:37.996 member: explicitly disabled via build config 00:05:37.997 pcapng: explicitly disabled via build config 00:05:37.997 rawdev: explicitly disabled via build config 00:05:37.997 regexdev: explicitly disabled via build config 00:05:37.997 mldev: explicitly disabled via build config 00:05:37.997 rib: explicitly disabled via build config 00:05:37.997 sched: explicitly disabled via build config 00:05:37.997 
stack: explicitly disabled via build config 00:05:37.997 ipsec: explicitly disabled via build config 00:05:37.997 pdcp: explicitly disabled via build config 00:05:37.997 fib: explicitly disabled via build config 00:05:37.997 port: explicitly disabled via build config 00:05:37.997 pdump: explicitly disabled via build config 00:05:37.997 table: explicitly disabled via build config 00:05:37.997 pipeline: explicitly disabled via build config 00:05:37.997 graph: explicitly disabled via build config 00:05:37.997 node: explicitly disabled via build config 00:05:37.997 00:05:37.997 drivers: 00:05:37.997 common/cpt: not in enabled drivers build config 00:05:37.997 common/dpaax: not in enabled drivers build config 00:05:37.997 common/iavf: not in enabled drivers build config 00:05:37.997 common/idpf: not in enabled drivers build config 00:05:37.997 common/ionic: not in enabled drivers build config 00:05:37.997 common/mvep: not in enabled drivers build config 00:05:37.997 common/octeontx: not in enabled drivers build config 00:05:37.997 bus/auxiliary: not in enabled drivers build config 00:05:37.997 bus/cdx: not in enabled drivers build config 00:05:37.997 bus/dpaa: not in enabled drivers build config 00:05:37.997 bus/fslmc: not in enabled drivers build config 00:05:37.997 bus/ifpga: not in enabled drivers build config 00:05:37.997 bus/platform: not in enabled drivers build config 00:05:37.997 bus/uacce: not in enabled drivers build config 00:05:37.997 bus/vmbus: not in enabled drivers build config 00:05:37.997 common/cnxk: not in enabled drivers build config 00:05:37.997 common/mlx5: not in enabled drivers build config 00:05:37.997 common/nfp: not in enabled drivers build config 00:05:37.997 common/nitrox: not in enabled drivers build config 00:05:37.997 common/qat: not in enabled drivers build config 00:05:37.997 common/sfc_efx: not in enabled drivers build config 00:05:37.997 mempool/bucket: not in enabled drivers build config 00:05:37.997 mempool/cnxk: not in enabled 
drivers build config 00:05:37.997 mempool/dpaa: not in enabled drivers build config 00:05:37.997 mempool/dpaa2: not in enabled drivers build config 00:05:37.997 mempool/octeontx: not in enabled drivers build config 00:05:37.997 mempool/stack: not in enabled drivers build config 00:05:37.997 dma/cnxk: not in enabled drivers build config 00:05:37.997 dma/dpaa: not in enabled drivers build config 00:05:37.997 dma/dpaa2: not in enabled drivers build config 00:05:37.997 dma/hisilicon: not in enabled drivers build config 00:05:37.997 dma/idxd: not in enabled drivers build config 00:05:37.997 dma/ioat: not in enabled drivers build config 00:05:37.997 dma/skeleton: not in enabled drivers build config 00:05:37.997 net/af_packet: not in enabled drivers build config 00:05:37.997 net/af_xdp: not in enabled drivers build config 00:05:37.997 net/ark: not in enabled drivers build config 00:05:37.997 net/atlantic: not in enabled drivers build config 00:05:37.997 net/avp: not in enabled drivers build config 00:05:37.997 net/axgbe: not in enabled drivers build config 00:05:37.997 net/bnx2x: not in enabled drivers build config 00:05:37.997 net/bnxt: not in enabled drivers build config 00:05:37.997 net/bonding: not in enabled drivers build config 00:05:37.997 net/cnxk: not in enabled drivers build config 00:05:37.997 net/cpfl: not in enabled drivers build config 00:05:37.997 net/cxgbe: not in enabled drivers build config 00:05:37.997 net/dpaa: not in enabled drivers build config 00:05:37.997 net/dpaa2: not in enabled drivers build config 00:05:37.997 net/e1000: not in enabled drivers build config 00:05:37.997 net/ena: not in enabled drivers build config 00:05:37.997 net/enetc: not in enabled drivers build config 00:05:37.997 net/enetfec: not in enabled drivers build config 00:05:37.997 net/enic: not in enabled drivers build config 00:05:37.997 net/failsafe: not in enabled drivers build config 00:05:37.997 net/fm10k: not in enabled drivers build config 00:05:37.997 net/gve: not in 
enabled drivers build config 00:05:37.997 net/hinic: not in enabled drivers build config 00:05:37.997 net/hns3: not in enabled drivers build config 00:05:37.997 net/i40e: not in enabled drivers build config 00:05:37.997 net/iavf: not in enabled drivers build config 00:05:37.997 net/ice: not in enabled drivers build config 00:05:37.997 net/idpf: not in enabled drivers build config 00:05:37.997 net/igc: not in enabled drivers build config 00:05:37.997 net/ionic: not in enabled drivers build config 00:05:37.997 net/ipn3ke: not in enabled drivers build config 00:05:37.997 net/ixgbe: not in enabled drivers build config 00:05:37.997 net/mana: not in enabled drivers build config 00:05:37.997 net/memif: not in enabled drivers build config 00:05:37.997 net/mlx4: not in enabled drivers build config 00:05:37.997 net/mlx5: not in enabled drivers build config 00:05:37.997 net/mvneta: not in enabled drivers build config 00:05:37.997 net/mvpp2: not in enabled drivers build config 00:05:37.997 net/netvsc: not in enabled drivers build config 00:05:37.997 net/nfb: not in enabled drivers build config 00:05:37.997 net/nfp: not in enabled drivers build config 00:05:37.997 net/ngbe: not in enabled drivers build config 00:05:37.997 net/null: not in enabled drivers build config 00:05:37.997 net/octeontx: not in enabled drivers build config 00:05:37.997 net/octeon_ep: not in enabled drivers build config 00:05:37.997 net/pcap: not in enabled drivers build config 00:05:37.997 net/pfe: not in enabled drivers build config 00:05:37.997 net/qede: not in enabled drivers build config 00:05:37.997 net/ring: not in enabled drivers build config 00:05:37.997 net/sfc: not in enabled drivers build config 00:05:37.997 net/softnic: not in enabled drivers build config 00:05:37.997 net/tap: not in enabled drivers build config 00:05:37.997 net/thunderx: not in enabled drivers build config 00:05:37.997 net/txgbe: not in enabled drivers build config 00:05:37.997 net/vdev_netvsc: not in enabled drivers build 
config 00:05:37.997 net/vhost: not in enabled drivers build config 00:05:37.997 net/virtio: not in enabled drivers build config 00:05:37.997 net/vmxnet3: not in enabled drivers build config 00:05:37.997 raw/*: missing internal dependency, "rawdev" 00:05:37.997 crypto/armv8: not in enabled drivers build config 00:05:37.997 crypto/bcmfs: not in enabled drivers build config 00:05:37.997 crypto/caam_jr: not in enabled drivers build config 00:05:37.997 crypto/ccp: not in enabled drivers build config 00:05:37.997 crypto/cnxk: not in enabled drivers build config 00:05:37.997 crypto/dpaa_sec: not in enabled drivers build config 00:05:37.997 crypto/dpaa2_sec: not in enabled drivers build config 00:05:37.997 crypto/ipsec_mb: not in enabled drivers build config 00:05:37.997 crypto/mlx5: not in enabled drivers build config 00:05:37.997 crypto/mvsam: not in enabled drivers build config 00:05:37.997 crypto/nitrox: not in enabled drivers build config 00:05:37.997 crypto/null: not in enabled drivers build config 00:05:37.997 crypto/octeontx: not in enabled drivers build config 00:05:37.997 crypto/openssl: not in enabled drivers build config 00:05:37.997 crypto/scheduler: not in enabled drivers build config 00:05:37.997 crypto/uadk: not in enabled drivers build config 00:05:37.997 crypto/virtio: not in enabled drivers build config 00:05:37.997 compress/isal: not in enabled drivers build config 00:05:37.997 compress/mlx5: not in enabled drivers build config 00:05:37.997 compress/nitrox: not in enabled drivers build config 00:05:37.997 compress/octeontx: not in enabled drivers build config 00:05:37.997 compress/zlib: not in enabled drivers build config 00:05:37.997 regex/*: missing internal dependency, "regexdev" 00:05:37.997 ml/*: missing internal dependency, "mldev" 00:05:37.997 vdpa/ifc: not in enabled drivers build config 00:05:37.997 vdpa/mlx5: not in enabled drivers build config 00:05:37.997 vdpa/nfp: not in enabled drivers build config 00:05:37.997 vdpa/sfc: not in enabled 
drivers build config 00:05:37.997 event/*: missing internal dependency, "eventdev" 00:05:37.997 baseband/*: missing internal dependency, "bbdev" 00:05:37.997 gpu/*: missing internal dependency, "gpudev" 00:05:37.997 00:05:37.997 00:05:37.997 Build targets in project: 85 00:05:37.997 00:05:37.997 DPDK 24.03.0 00:05:37.997 00:05:37.997 User defined options 00:05:37.997 buildtype : debug 00:05:37.997 default_library : shared 00:05:37.997 libdir : lib 00:05:37.997 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:37.997 b_sanitize : address 00:05:37.997 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:37.997 c_link_args : 00:05:37.997 cpu_instruction_set: native 00:05:37.997 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:37.997 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:37.997 enable_docs : false 00:05:37.997 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:37.997 enable_kmods : false 00:05:37.997 max_lcores : 128 00:05:37.997 tests : false 00:05:37.997 00:05:37.997 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:37.997 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:37.997 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:37.997 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:37.997 [3/268] Linking static target lib/librte_kvargs.a 00:05:37.997 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:05:37.997 [5/268] Linking static target lib/librte_log.a 00:05:37.997 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:38.256 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.256 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:38.256 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:38.516 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:38.516 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:38.516 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:38.516 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:38.516 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:38.516 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:38.516 [16/268] Linking static target lib/librte_telemetry.a 00:05:38.775 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:38.775 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:38.775 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:38.775 [20/268] Linking target lib/librte_log.so.24.1 00:05:39.035 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:39.035 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:39.035 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:39.295 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:39.295 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:39.295 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 
00:05:39.295 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:39.555 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.555 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:39.555 [30/268] Linking target lib/librte_telemetry.so.24.1 00:05:39.555 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:39.814 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:39.814 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:39.814 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:39.814 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:40.072 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:40.072 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:40.072 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:40.330 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:40.330 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:40.330 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:40.330 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:40.330 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:40.588 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:40.846 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:40.846 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:40.846 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:40.846 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:41.104 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:41.104 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:41.104 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:41.104 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:41.362 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:41.362 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:41.621 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:41.621 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:41.879 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:41.879 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:41.879 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:42.136 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:42.136 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:42.136 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:42.136 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:42.136 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:42.393 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:42.651 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:42.651 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:42.910 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:42.910 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:42.910 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:43.169 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:43.169 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:43.169 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:43.169 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:43.169 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:43.169 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:43.169 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:43.428 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:43.428 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:43.687 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:43.687 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:43.687 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:43.687 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:43.944 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:43.944 [85/268] Linking static target lib/librte_eal.a 00:05:44.202 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:44.202 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:44.202 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:44.202 [89/268] Linking static target lib/librte_rcu.a 00:05:44.202 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:44.202 [91/268] Linking static target lib/librte_ring.a 00:05:44.461 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:44.461 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:44.720 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 
00:05:44.720 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:44.720 [96/268] Linking static target lib/librte_mempool.a 00:05:44.720 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.978 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.978 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:44.978 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:44.978 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:44.978 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:44.978 [103/268] Linking static target lib/librte_mbuf.a 00:05:44.978 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:45.237 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:45.496 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:45.496 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:45.496 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:45.496 [109/268] Linking static target lib/librte_meter.a 00:05:45.496 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:45.753 [111/268] Linking static target lib/librte_net.a 00:05:45.753 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:45.753 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:46.011 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.011 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.011 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.011 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 
00:05:46.011 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:46.269 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:46.528 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:46.786 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:47.045 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:47.045 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:47.045 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:47.303 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:47.303 [126/268] Linking static target lib/librte_pci.a 00:05:47.303 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:47.581 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:47.581 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:47.581 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:47.581 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:47.847 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:47.847 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.847 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:47.847 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:47.847 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:47.847 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:47.847 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:47.847 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:47.847 [140/268] Compiling 
C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:47.847 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:48.104 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:48.104 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:48.104 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:48.104 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:48.363 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:48.363 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:48.363 [148/268] Linking static target lib/librte_cmdline.a 00:05:48.620 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:48.621 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:48.879 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:48.879 [152/268] Linking static target lib/librte_timer.a 00:05:48.879 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:48.879 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:48.879 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:49.137 [156/268] Linking static target lib/librte_ethdev.a 00:05:49.137 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:49.395 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:49.654 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.654 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:49.912 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:49.912 [162/268] Linking static target lib/librte_compressdev.a 00:05:49.912 [163/268] 
Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:49.912 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:49.912 [165/268] Linking static target lib/librte_hash.a 00:05:49.912 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:50.171 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:50.171 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:50.171 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.171 [170/268] Linking static target lib/librte_dmadev.a 00:05:50.429 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:50.429 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:50.688 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:50.688 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:50.947 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.947 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:50.947 [177/268] Linking static target lib/librte_cryptodev.a 00:05:50.947 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:51.207 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.207 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.207 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:51.466 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:51.466 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:51.466 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 
00:05:51.724 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:51.982 [186/268] Linking static target lib/librte_power.a 00:05:51.982 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:51.982 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:51.982 [189/268] Linking static target lib/librte_reorder.a 00:05:52.240 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:52.240 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:52.240 [192/268] Linking static target lib/librte_security.a 00:05:52.498 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:52.498 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.757 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:53.016 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.016 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.276 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:53.276 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:53.534 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:53.535 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:53.535 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.793 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:53.793 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:54.051 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:54.051 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:54.309 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:54.567 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:54.567 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:54.567 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:54.567 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:54.827 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:54.827 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:54.827 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:54.827 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:54.827 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:54.827 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:54.827 [218/268] Linking static target drivers/librte_bus_vdev.a 00:05:54.827 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:54.827 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:54.827 [221/268] Linking static target drivers/librte_bus_pci.a 00:05:55.093 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:55.093 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:55.093 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:55.093 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:55.093 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.352 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:05:55.917 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:56.175 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.175 [230/268] Linking target lib/librte_eal.so.24.1 00:05:56.434 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:56.434 [232/268] Linking target lib/librte_pci.so.24.1 00:05:56.434 [233/268] Linking target lib/librte_ring.so.24.1 00:05:56.434 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:56.434 [235/268] Linking target lib/librte_dmadev.so.24.1 00:05:56.434 [236/268] Linking target lib/librte_meter.so.24.1 00:05:56.693 [237/268] Linking target lib/librte_timer.so.24.1 00:05:56.693 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:56.693 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:56.693 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:56.693 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:56.693 [242/268] Linking target lib/librte_rcu.so.24.1 00:05:56.693 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:56.693 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:56.693 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:56.950 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:56.950 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:56.950 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:56.950 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:57.209 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:57.209 [251/268] Linking target lib/librte_reorder.so.24.1 00:05:57.209 [252/268] Linking target 
lib/librte_net.so.24.1 00:05:57.209 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:05:57.209 [254/268] Linking target lib/librte_compressdev.so.24.1 00:05:57.209 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:57.209 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:57.467 [257/268] Linking target lib/librte_cmdline.so.24.1 00:05:57.467 [258/268] Linking target lib/librte_hash.so.24.1 00:05:57.467 [259/268] Linking target lib/librte_security.so.24.1 00:05:57.467 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:57.726 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.726 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:57.984 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:57.984 [264/268] Linking target lib/librte_power.so.24.1 00:06:01.339 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:01.339 [266/268] Linking static target lib/librte_vhost.a 00:06:02.714 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.714 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:02.714 INFO: autodetecting backend as ninja 00:06:02.714 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:24.636 CC lib/log/log_flags.o 00:06:24.636 CC lib/log/log.o 00:06:24.636 CC lib/log/log_deprecated.o 00:06:24.636 CC lib/ut_mock/mock.o 00:06:24.636 CC lib/ut/ut.o 00:06:24.636 LIB libspdk_ut.a 00:06:24.636 SO libspdk_ut.so.2.0 00:06:24.636 LIB libspdk_log.a 00:06:24.636 LIB libspdk_ut_mock.a 00:06:24.636 SO libspdk_log.so.7.1 00:06:24.636 SO libspdk_ut_mock.so.6.0 00:06:24.636 SYMLINK libspdk_ut.so 00:06:24.636 SYMLINK libspdk_ut_mock.so 00:06:24.636 SYMLINK libspdk_log.so 
00:06:24.636 CXX lib/trace_parser/trace.o 00:06:24.636 CC lib/ioat/ioat.o 00:06:24.636 CC lib/util/base64.o 00:06:24.636 CC lib/util/bit_array.o 00:06:24.636 CC lib/util/cpuset.o 00:06:24.636 CC lib/util/crc32.o 00:06:24.636 CC lib/util/crc16.o 00:06:24.636 CC lib/util/crc32c.o 00:06:24.636 CC lib/dma/dma.o 00:06:24.636 CC lib/vfio_user/host/vfio_user_pci.o 00:06:24.636 CC lib/vfio_user/host/vfio_user.o 00:06:24.636 CC lib/util/crc32_ieee.o 00:06:24.636 CC lib/util/crc64.o 00:06:24.636 CC lib/util/dif.o 00:06:24.636 CC lib/util/fd.o 00:06:24.895 LIB libspdk_dma.a 00:06:24.895 CC lib/util/fd_group.o 00:06:24.895 LIB libspdk_ioat.a 00:06:24.895 SO libspdk_dma.so.5.0 00:06:24.895 CC lib/util/file.o 00:06:24.895 SO libspdk_ioat.so.7.0 00:06:24.895 CC lib/util/hexlify.o 00:06:24.895 CC lib/util/iov.o 00:06:24.895 SYMLINK libspdk_dma.so 00:06:24.895 LIB libspdk_vfio_user.a 00:06:24.895 CC lib/util/math.o 00:06:24.895 CC lib/util/net.o 00:06:24.895 SO libspdk_vfio_user.so.5.0 00:06:24.895 SYMLINK libspdk_ioat.so 00:06:24.895 CC lib/util/pipe.o 00:06:24.895 SYMLINK libspdk_vfio_user.so 00:06:24.895 CC lib/util/strerror_tls.o 00:06:24.895 CC lib/util/string.o 00:06:25.154 CC lib/util/uuid.o 00:06:25.154 CC lib/util/xor.o 00:06:25.154 CC lib/util/zipf.o 00:06:25.154 CC lib/util/md5.o 00:06:25.411 LIB libspdk_util.a 00:06:25.669 SO libspdk_util.so.10.1 00:06:25.669 LIB libspdk_trace_parser.a 00:06:25.669 SO libspdk_trace_parser.so.6.0 00:06:25.669 SYMLINK libspdk_util.so 00:06:25.928 SYMLINK libspdk_trace_parser.so 00:06:25.928 CC lib/idxd/idxd.o 00:06:25.928 CC lib/idxd/idxd_user.o 00:06:25.928 CC lib/idxd/idxd_kernel.o 00:06:25.928 CC lib/rdma_utils/rdma_utils.o 00:06:25.928 CC lib/json/json_util.o 00:06:25.928 CC lib/json/json_parse.o 00:06:25.928 CC lib/vmd/led.o 00:06:25.928 CC lib/vmd/vmd.o 00:06:25.928 CC lib/env_dpdk/env.o 00:06:25.928 CC lib/conf/conf.o 00:06:26.186 CC lib/env_dpdk/memory.o 00:06:26.186 CC lib/env_dpdk/pci.o 00:06:26.186 CC lib/json/json_write.o 
00:06:26.186 CC lib/env_dpdk/init.o 00:06:26.186 LIB libspdk_rdma_utils.a 00:06:26.444 SO libspdk_rdma_utils.so.1.0 00:06:26.444 SYMLINK libspdk_rdma_utils.so 00:06:26.444 CC lib/env_dpdk/threads.o 00:06:26.444 LIB libspdk_conf.a 00:06:26.444 CC lib/env_dpdk/pci_ioat.o 00:06:26.444 SO libspdk_conf.so.6.0 00:06:26.444 SYMLINK libspdk_conf.so 00:06:26.444 CC lib/env_dpdk/pci_virtio.o 00:06:26.444 CC lib/env_dpdk/pci_vmd.o 00:06:26.703 LIB libspdk_json.a 00:06:26.703 SO libspdk_json.so.6.0 00:06:26.703 CC lib/env_dpdk/pci_idxd.o 00:06:26.703 CC lib/env_dpdk/pci_event.o 00:06:26.703 SYMLINK libspdk_json.so 00:06:26.703 CC lib/env_dpdk/sigbus_handler.o 00:06:26.703 LIB libspdk_idxd.a 00:06:26.703 SO libspdk_idxd.so.12.1 00:06:26.703 CC lib/env_dpdk/pci_dpdk.o 00:06:27.029 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:27.029 CC lib/rdma_provider/common.o 00:06:27.029 CC lib/jsonrpc/jsonrpc_server.o 00:06:27.029 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:27.029 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:27.029 SYMLINK libspdk_idxd.so 00:06:27.029 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:27.029 LIB libspdk_vmd.a 00:06:27.029 CC lib/jsonrpc/jsonrpc_client.o 00:06:27.029 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:27.029 SO libspdk_vmd.so.6.0 00:06:27.029 SYMLINK libspdk_vmd.so 00:06:27.290 LIB libspdk_rdma_provider.a 00:06:27.290 SO libspdk_rdma_provider.so.7.0 00:06:27.290 LIB libspdk_jsonrpc.a 00:06:27.290 SYMLINK libspdk_rdma_provider.so 00:06:27.290 SO libspdk_jsonrpc.so.6.0 00:06:27.290 SYMLINK libspdk_jsonrpc.so 00:06:27.548 CC lib/rpc/rpc.o 00:06:27.806 LIB libspdk_rpc.a 00:06:27.806 SO libspdk_rpc.so.6.0 00:06:27.806 LIB libspdk_env_dpdk.a 00:06:28.065 SYMLINK libspdk_rpc.so 00:06:28.065 SO libspdk_env_dpdk.so.15.1 00:06:28.065 SYMLINK libspdk_env_dpdk.so 00:06:28.065 CC lib/trace/trace.o 00:06:28.065 CC lib/notify/notify.o 00:06:28.065 CC lib/trace/trace_rpc.o 00:06:28.065 CC lib/notify/notify_rpc.o 00:06:28.065 CC lib/trace/trace_flags.o 00:06:28.065 CC 
lib/keyring/keyring.o 00:06:28.065 CC lib/keyring/keyring_rpc.o 00:06:28.323 LIB libspdk_notify.a 00:06:28.323 SO libspdk_notify.so.6.0 00:06:28.581 SYMLINK libspdk_notify.so 00:06:28.581 LIB libspdk_keyring.a 00:06:28.581 SO libspdk_keyring.so.2.0 00:06:28.581 LIB libspdk_trace.a 00:06:28.581 SO libspdk_trace.so.11.0 00:06:28.581 SYMLINK libspdk_keyring.so 00:06:28.581 SYMLINK libspdk_trace.so 00:06:28.840 CC lib/thread/thread.o 00:06:28.841 CC lib/thread/iobuf.o 00:06:28.841 CC lib/sock/sock_rpc.o 00:06:28.841 CC lib/sock/sock.o 00:06:29.409 LIB libspdk_sock.a 00:06:29.668 SO libspdk_sock.so.10.0 00:06:29.668 SYMLINK libspdk_sock.so 00:06:29.927 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:29.927 CC lib/nvme/nvme_ctrlr.o 00:06:29.927 CC lib/nvme/nvme_ns_cmd.o 00:06:29.927 CC lib/nvme/nvme_fabric.o 00:06:29.927 CC lib/nvme/nvme_ns.o 00:06:29.927 CC lib/nvme/nvme_pcie_common.o 00:06:29.927 CC lib/nvme/nvme_qpair.o 00:06:29.927 CC lib/nvme/nvme_pcie.o 00:06:29.927 CC lib/nvme/nvme.o 00:06:30.862 CC lib/nvme/nvme_quirks.o 00:06:30.862 CC lib/nvme/nvme_transport.o 00:06:30.862 LIB libspdk_thread.a 00:06:30.862 SO libspdk_thread.so.11.0 00:06:30.862 CC lib/nvme/nvme_discovery.o 00:06:30.862 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:30.862 SYMLINK libspdk_thread.so 00:06:30.862 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:31.122 CC lib/nvme/nvme_tcp.o 00:06:31.122 CC lib/nvme/nvme_opal.o 00:06:31.122 CC lib/nvme/nvme_io_msg.o 00:06:31.122 CC lib/nvme/nvme_poll_group.o 00:06:31.381 CC lib/nvme/nvme_zns.o 00:06:31.640 CC lib/nvme/nvme_stubs.o 00:06:31.640 CC lib/nvme/nvme_auth.o 00:06:31.640 CC lib/accel/accel.o 00:06:31.640 CC lib/nvme/nvme_cuse.o 00:06:31.898 CC lib/accel/accel_rpc.o 00:06:31.898 CC lib/blob/blobstore.o 00:06:31.898 CC lib/accel/accel_sw.o 00:06:32.157 CC lib/blob/request.o 00:06:32.157 CC lib/blob/zeroes.o 00:06:32.157 CC lib/blob/blob_bs_dev.o 00:06:32.416 CC lib/nvme/nvme_rdma.o 00:06:32.674 CC lib/init/json_config.o 00:06:32.674 CC lib/virtio/virtio.o 00:06:32.674 CC 
lib/fsdev/fsdev.o 00:06:32.674 CC lib/fsdev/fsdev_io.o 00:06:32.932 CC lib/init/subsystem.o 00:06:32.932 CC lib/fsdev/fsdev_rpc.o 00:06:32.932 CC lib/virtio/virtio_vhost_user.o 00:06:32.932 CC lib/virtio/virtio_vfio_user.o 00:06:32.932 CC lib/virtio/virtio_pci.o 00:06:33.191 CC lib/init/subsystem_rpc.o 00:06:33.191 CC lib/init/rpc.o 00:06:33.191 LIB libspdk_accel.a 00:06:33.191 SO libspdk_accel.so.16.0 00:06:33.191 LIB libspdk_init.a 00:06:33.191 SO libspdk_init.so.6.0 00:06:33.449 SYMLINK libspdk_accel.so 00:06:33.449 SYMLINK libspdk_init.so 00:06:33.449 LIB libspdk_virtio.a 00:06:33.449 SO libspdk_virtio.so.7.0 00:06:33.449 LIB libspdk_fsdev.a 00:06:33.449 SO libspdk_fsdev.so.2.0 00:06:33.707 SYMLINK libspdk_virtio.so 00:06:33.707 CC lib/bdev/bdev_rpc.o 00:06:33.707 CC lib/bdev/bdev.o 00:06:33.707 CC lib/bdev/bdev_zone.o 00:06:33.707 CC lib/bdev/part.o 00:06:33.707 CC lib/bdev/scsi_nvme.o 00:06:33.707 CC lib/event/app.o 00:06:33.707 CC lib/event/reactor.o 00:06:33.707 SYMLINK libspdk_fsdev.so 00:06:33.707 CC lib/event/log_rpc.o 00:06:33.707 CC lib/event/app_rpc.o 00:06:33.707 CC lib/event/scheduler_static.o 00:06:33.981 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:34.253 LIB libspdk_nvme.a 00:06:34.253 LIB libspdk_event.a 00:06:34.513 SO libspdk_event.so.14.0 00:06:34.513 SO libspdk_nvme.so.15.0 00:06:34.513 SYMLINK libspdk_event.so 00:06:34.772 SYMLINK libspdk_nvme.so 00:06:35.032 LIB libspdk_fuse_dispatcher.a 00:06:35.032 SO libspdk_fuse_dispatcher.so.1.0 00:06:35.032 SYMLINK libspdk_fuse_dispatcher.so 00:06:36.407 LIB libspdk_blob.a 00:06:36.407 SO libspdk_blob.so.11.0 00:06:36.407 SYMLINK libspdk_blob.so 00:06:36.665 CC lib/blobfs/blobfs.o 00:06:36.665 CC lib/blobfs/tree.o 00:06:36.666 CC lib/lvol/lvol.o 00:06:37.234 LIB libspdk_bdev.a 00:06:37.234 SO libspdk_bdev.so.17.0 00:06:37.492 SYMLINK libspdk_bdev.so 00:06:37.751 CC lib/nvmf/ctrlr.o 00:06:37.751 CC lib/nvmf/ctrlr_discovery.o 00:06:37.751 CC lib/nvmf/subsystem.o 00:06:37.751 CC lib/nvmf/ctrlr_bdev.o 
00:06:37.751 CC lib/nbd/nbd.o 00:06:37.751 CC lib/ublk/ublk.o 00:06:37.751 CC lib/scsi/dev.o 00:06:37.751 CC lib/ftl/ftl_core.o 00:06:37.751 LIB libspdk_blobfs.a 00:06:37.751 SO libspdk_blobfs.so.10.0 00:06:37.751 LIB libspdk_lvol.a 00:06:37.751 SO libspdk_lvol.so.10.0 00:06:38.009 SYMLINK libspdk_blobfs.so 00:06:38.009 CC lib/ublk/ublk_rpc.o 00:06:38.009 SYMLINK libspdk_lvol.so 00:06:38.009 CC lib/nbd/nbd_rpc.o 00:06:38.009 CC lib/scsi/lun.o 00:06:38.267 CC lib/ftl/ftl_init.o 00:06:38.267 CC lib/nvmf/nvmf.o 00:06:38.267 CC lib/ftl/ftl_layout.o 00:06:38.267 CC lib/ftl/ftl_debug.o 00:06:38.267 CC lib/scsi/port.o 00:06:38.525 CC lib/ftl/ftl_io.o 00:06:38.525 CC lib/nvmf/nvmf_rpc.o 00:06:38.525 CC lib/scsi/scsi.o 00:06:38.525 CC lib/nvmf/transport.o 00:06:38.525 LIB libspdk_nbd.a 00:06:38.783 CC lib/ftl/ftl_sb.o 00:06:38.784 SO libspdk_nbd.so.7.0 00:06:38.784 CC lib/scsi/scsi_bdev.o 00:06:38.784 SYMLINK libspdk_nbd.so 00:06:38.784 CC lib/scsi/scsi_pr.o 00:06:38.784 CC lib/scsi/scsi_rpc.o 00:06:38.784 CC lib/ftl/ftl_l2p.o 00:06:39.042 LIB libspdk_ublk.a 00:06:39.042 SO libspdk_ublk.so.3.0 00:06:39.042 CC lib/scsi/task.o 00:06:39.042 SYMLINK libspdk_ublk.so 00:06:39.042 CC lib/ftl/ftl_l2p_flat.o 00:06:39.042 CC lib/nvmf/tcp.o 00:06:39.301 CC lib/ftl/ftl_nv_cache.o 00:06:39.301 CC lib/nvmf/stubs.o 00:06:39.301 CC lib/nvmf/mdns_server.o 00:06:39.301 LIB libspdk_scsi.a 00:06:39.559 SO libspdk_scsi.so.9.0 00:06:39.559 CC lib/nvmf/rdma.o 00:06:39.559 CC lib/ftl/ftl_band.o 00:06:39.559 SYMLINK libspdk_scsi.so 00:06:39.559 CC lib/ftl/ftl_band_ops.o 00:06:39.559 CC lib/ftl/ftl_writer.o 00:06:39.817 CC lib/ftl/ftl_rq.o 00:06:40.076 CC lib/ftl/ftl_reloc.o 00:06:40.076 CC lib/nvmf/auth.o 00:06:40.076 CC lib/iscsi/conn.o 00:06:40.076 CC lib/iscsi/init_grp.o 00:06:40.076 CC lib/vhost/vhost.o 00:06:40.335 CC lib/iscsi/iscsi.o 00:06:40.593 CC lib/ftl/ftl_l2p_cache.o 00:06:40.593 CC lib/ftl/ftl_p2l.o 00:06:40.593 CC lib/iscsi/param.o 00:06:40.852 CC lib/vhost/vhost_rpc.o 00:06:40.852 
CC lib/iscsi/portal_grp.o 00:06:41.112 CC lib/ftl/ftl_p2l_log.o 00:06:41.112 CC lib/ftl/mngt/ftl_mngt.o 00:06:41.112 CC lib/vhost/vhost_scsi.o 00:06:41.112 CC lib/vhost/vhost_blk.o 00:06:41.370 CC lib/iscsi/tgt_node.o 00:06:41.370 CC lib/iscsi/iscsi_subsystem.o 00:06:41.370 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:41.628 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:41.628 CC lib/vhost/rte_vhost_user.o 00:06:41.628 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:41.628 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:41.628 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:41.887 CC lib/iscsi/iscsi_rpc.o 00:06:41.887 CC lib/iscsi/task.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:42.145 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:42.404 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:42.404 CC lib/ftl/utils/ftl_conf.o 00:06:42.404 LIB libspdk_iscsi.a 00:06:42.404 CC lib/ftl/utils/ftl_md.o 00:06:42.404 CC lib/ftl/utils/ftl_mempool.o 00:06:42.404 CC lib/ftl/utils/ftl_bitmap.o 00:06:42.404 SO libspdk_iscsi.so.8.0 00:06:42.662 CC lib/ftl/utils/ftl_property.o 00:06:42.662 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:42.662 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:42.662 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:42.662 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:42.662 SYMLINK libspdk_iscsi.so 00:06:42.662 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:42.662 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:42.920 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:42.920 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:42.920 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:42.920 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:42.920 LIB libspdk_vhost.a 00:06:42.920 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:42.920 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:42.920 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:43.178 CC lib/ftl/base/ftl_base_bdev.o 00:06:43.178 CC 
lib/ftl/base/ftl_base_dev.o 00:06:43.178 SO libspdk_vhost.so.8.0 00:06:43.178 LIB libspdk_nvmf.a 00:06:43.178 CC lib/ftl/ftl_trace.o 00:06:43.178 SYMLINK libspdk_vhost.so 00:06:43.178 SO libspdk_nvmf.so.20.0 00:06:43.437 LIB libspdk_ftl.a 00:06:43.437 SYMLINK libspdk_nvmf.so 00:06:43.695 SO libspdk_ftl.so.9.0 00:06:43.954 SYMLINK libspdk_ftl.so 00:06:44.519 CC module/env_dpdk/env_dpdk_rpc.o 00:06:44.519 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:44.519 CC module/keyring/file/keyring.o 00:06:44.519 CC module/accel/error/accel_error.o 00:06:44.519 CC module/fsdev/aio/fsdev_aio.o 00:06:44.519 CC module/accel/ioat/accel_ioat.o 00:06:44.519 CC module/accel/dsa/accel_dsa.o 00:06:44.519 CC module/sock/posix/posix.o 00:06:44.519 CC module/keyring/linux/keyring.o 00:06:44.519 CC module/blob/bdev/blob_bdev.o 00:06:44.519 LIB libspdk_env_dpdk_rpc.a 00:06:44.519 CC module/keyring/file/keyring_rpc.o 00:06:44.519 SO libspdk_env_dpdk_rpc.so.6.0 00:06:44.778 CC module/accel/ioat/accel_ioat_rpc.o 00:06:44.778 CC module/accel/error/accel_error_rpc.o 00:06:44.778 LIB libspdk_scheduler_dynamic.a 00:06:44.778 CC module/keyring/linux/keyring_rpc.o 00:06:44.778 SYMLINK libspdk_env_dpdk_rpc.so 00:06:44.778 SO libspdk_scheduler_dynamic.so.4.0 00:06:44.778 LIB libspdk_keyring_file.a 00:06:44.778 LIB libspdk_blob_bdev.a 00:06:44.778 SO libspdk_keyring_file.so.2.0 00:06:44.778 CC module/accel/dsa/accel_dsa_rpc.o 00:06:44.778 SYMLINK libspdk_scheduler_dynamic.so 00:06:44.778 SO libspdk_blob_bdev.so.11.0 00:06:44.778 LIB libspdk_accel_error.a 00:06:44.778 SYMLINK libspdk_keyring_file.so 00:06:44.778 LIB libspdk_keyring_linux.a 00:06:44.778 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:44.778 SO libspdk_accel_error.so.2.0 00:06:45.036 SYMLINK libspdk_blob_bdev.so 00:06:45.036 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:45.036 SO libspdk_keyring_linux.so.1.0 00:06:45.036 LIB libspdk_accel_ioat.a 00:06:45.036 SO libspdk_accel_ioat.so.6.0 00:06:45.036 SYMLINK 
libspdk_accel_error.so 00:06:45.036 SYMLINK libspdk_keyring_linux.so 00:06:45.036 LIB libspdk_accel_dsa.a 00:06:45.036 CC module/scheduler/gscheduler/gscheduler.o 00:06:45.036 SYMLINK libspdk_accel_ioat.so 00:06:45.036 CC module/fsdev/aio/linux_aio_mgr.o 00:06:45.036 SO libspdk_accel_dsa.so.5.0 00:06:45.036 LIB libspdk_scheduler_dpdk_governor.a 00:06:45.036 CC module/accel/iaa/accel_iaa.o 00:06:45.036 SYMLINK libspdk_accel_dsa.so 00:06:45.036 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:45.036 CC module/accel/iaa/accel_iaa_rpc.o 00:06:45.294 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:45.294 LIB libspdk_scheduler_gscheduler.a 00:06:45.294 SO libspdk_scheduler_gscheduler.so.4.0 00:06:45.294 CC module/bdev/delay/vbdev_delay.o 00:06:45.294 SYMLINK libspdk_scheduler_gscheduler.so 00:06:45.294 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:45.294 CC module/blobfs/bdev/blobfs_bdev.o 00:06:45.294 CC module/bdev/error/vbdev_error.o 00:06:45.294 LIB libspdk_fsdev_aio.a 00:06:45.294 LIB libspdk_accel_iaa.a 00:06:45.552 SO libspdk_fsdev_aio.so.1.0 00:06:45.552 CC module/bdev/gpt/gpt.o 00:06:45.552 SO libspdk_accel_iaa.so.3.0 00:06:45.552 CC module/bdev/lvol/vbdev_lvol.o 00:06:45.552 CC module/bdev/malloc/bdev_malloc.o 00:06:45.552 SYMLINK libspdk_accel_iaa.so 00:06:45.552 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:45.552 SYMLINK libspdk_fsdev_aio.so 00:06:45.552 CC module/bdev/gpt/vbdev_gpt.o 00:06:45.552 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:45.810 CC module/bdev/error/vbdev_error_rpc.o 00:06:45.810 LIB libspdk_sock_posix.a 00:06:45.810 CC module/bdev/null/bdev_null.o 00:06:45.810 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:45.810 LIB libspdk_blobfs_bdev.a 00:06:45.810 SO libspdk_sock_posix.so.6.0 00:06:45.810 LIB libspdk_bdev_delay.a 00:06:45.810 SO libspdk_blobfs_bdev.so.6.0 00:06:45.810 SO libspdk_bdev_delay.so.6.0 00:06:45.810 CC module/bdev/nvme/bdev_nvme.o 00:06:45.810 SYMLINK libspdk_sock_posix.so 00:06:45.810 CC module/bdev/null/bdev_null_rpc.o 
00:06:45.810 LIB libspdk_bdev_error.a 00:06:45.810 SYMLINK libspdk_blobfs_bdev.so 00:06:45.810 SYMLINK libspdk_bdev_delay.so 00:06:45.810 LIB libspdk_bdev_gpt.a 00:06:46.068 SO libspdk_bdev_error.so.6.0 00:06:46.068 SO libspdk_bdev_gpt.so.6.0 00:06:46.068 LIB libspdk_bdev_malloc.a 00:06:46.068 SYMLINK libspdk_bdev_gpt.so 00:06:46.068 SO libspdk_bdev_malloc.so.6.0 00:06:46.068 SYMLINK libspdk_bdev_error.so 00:06:46.068 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:46.068 CC module/bdev/passthru/vbdev_passthru.o 00:06:46.068 LIB libspdk_bdev_null.a 00:06:46.068 CC module/bdev/raid/bdev_raid.o 00:06:46.068 SYMLINK libspdk_bdev_malloc.so 00:06:46.068 SO libspdk_bdev_null.so.6.0 00:06:46.326 SYMLINK libspdk_bdev_null.so 00:06:46.326 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:46.326 CC module/bdev/split/vbdev_split.o 00:06:46.326 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:46.326 CC module/bdev/aio/bdev_aio.o 00:06:46.326 CC module/bdev/ftl/bdev_ftl.o 00:06:46.326 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:46.585 LIB libspdk_bdev_lvol.a 00:06:46.585 CC module/bdev/split/vbdev_split_rpc.o 00:06:46.585 LIB libspdk_bdev_passthru.a 00:06:46.585 SO libspdk_bdev_lvol.so.6.0 00:06:46.585 SO libspdk_bdev_passthru.so.6.0 00:06:46.585 SYMLINK libspdk_bdev_passthru.so 00:06:46.585 CC module/bdev/aio/bdev_aio_rpc.o 00:06:46.585 SYMLINK libspdk_bdev_lvol.so 00:06:46.585 CC module/bdev/raid/bdev_raid_rpc.o 00:06:46.585 CC module/bdev/raid/bdev_raid_sb.o 00:06:46.585 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:46.843 LIB libspdk_bdev_split.a 00:06:46.843 CC module/bdev/raid/raid0.o 00:06:46.843 LIB libspdk_bdev_ftl.a 00:06:46.843 SO libspdk_bdev_split.so.6.0 00:06:46.843 SO libspdk_bdev_ftl.so.6.0 00:06:46.843 SYMLINK libspdk_bdev_split.so 00:06:46.843 CC module/bdev/nvme/nvme_rpc.o 00:06:46.843 SYMLINK libspdk_bdev_ftl.so 00:06:46.843 LIB libspdk_bdev_aio.a 00:06:46.843 SO libspdk_bdev_aio.so.6.0 00:06:46.843 LIB libspdk_bdev_zone_block.a 00:06:46.843 SO 
libspdk_bdev_zone_block.so.6.0 00:06:46.843 CC module/bdev/raid/raid1.o 00:06:47.102 SYMLINK libspdk_bdev_aio.so 00:06:47.102 CC module/bdev/nvme/bdev_mdns_client.o 00:06:47.102 CC module/bdev/raid/concat.o 00:06:47.102 SYMLINK libspdk_bdev_zone_block.so 00:06:47.102 CC module/bdev/nvme/vbdev_opal.o 00:06:47.102 CC module/bdev/iscsi/bdev_iscsi.o 00:06:47.102 CC module/bdev/raid/raid5f.o 00:06:47.102 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:47.102 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:47.360 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:47.360 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:47.360 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:47.360 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:47.618 LIB libspdk_bdev_iscsi.a 00:06:47.618 SO libspdk_bdev_iscsi.so.6.0 00:06:47.618 SYMLINK libspdk_bdev_iscsi.so 00:06:47.876 LIB libspdk_bdev_raid.a 00:06:47.876 SO libspdk_bdev_raid.so.6.0 00:06:47.876 LIB libspdk_bdev_virtio.a 00:06:47.877 SO libspdk_bdev_virtio.so.6.0 00:06:47.877 SYMLINK libspdk_bdev_raid.so 00:06:48.135 SYMLINK libspdk_bdev_virtio.so 00:06:49.512 LIB libspdk_bdev_nvme.a 00:06:49.512 SO libspdk_bdev_nvme.so.7.1 00:06:49.770 SYMLINK libspdk_bdev_nvme.so 00:06:50.028 CC module/event/subsystems/iobuf/iobuf.o 00:06:50.028 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:50.028 CC module/event/subsystems/scheduler/scheduler.o 00:06:50.028 CC module/event/subsystems/sock/sock.o 00:06:50.028 CC module/event/subsystems/vmd/vmd.o 00:06:50.028 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:50.028 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:50.028 CC module/event/subsystems/keyring/keyring.o 00:06:50.028 CC module/event/subsystems/fsdev/fsdev.o 00:06:50.351 LIB libspdk_event_sock.a 00:06:50.351 LIB libspdk_event_keyring.a 00:06:50.351 LIB libspdk_event_vhost_blk.a 00:06:50.351 SO libspdk_event_sock.so.5.0 00:06:50.351 SO libspdk_event_keyring.so.1.0 00:06:50.351 SO libspdk_event_vhost_blk.so.3.0 00:06:50.351 LIB libspdk_event_fsdev.a 
00:06:50.351 SYMLINK libspdk_event_sock.so 00:06:50.351 LIB libspdk_event_vmd.a 00:06:50.351 LIB libspdk_event_iobuf.a 00:06:50.351 LIB libspdk_event_scheduler.a 00:06:50.351 SYMLINK libspdk_event_keyring.so 00:06:50.351 SO libspdk_event_fsdev.so.1.0 00:06:50.351 SYMLINK libspdk_event_vhost_blk.so 00:06:50.351 SO libspdk_event_scheduler.so.4.0 00:06:50.351 SO libspdk_event_iobuf.so.3.0 00:06:50.351 SO libspdk_event_vmd.so.6.0 00:06:50.610 SYMLINK libspdk_event_fsdev.so 00:06:50.610 SYMLINK libspdk_event_scheduler.so 00:06:50.610 SYMLINK libspdk_event_vmd.so 00:06:50.610 SYMLINK libspdk_event_iobuf.so 00:06:50.868 CC module/event/subsystems/accel/accel.o 00:06:50.868 LIB libspdk_event_accel.a 00:06:51.127 SO libspdk_event_accel.so.6.0 00:06:51.127 SYMLINK libspdk_event_accel.so 00:06:51.385 CC module/event/subsystems/bdev/bdev.o 00:06:51.643 LIB libspdk_event_bdev.a 00:06:51.643 SO libspdk_event_bdev.so.6.0 00:06:51.643 SYMLINK libspdk_event_bdev.so 00:06:51.902 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:51.902 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:51.902 CC module/event/subsystems/nbd/nbd.o 00:06:51.902 CC module/event/subsystems/scsi/scsi.o 00:06:51.902 CC module/event/subsystems/ublk/ublk.o 00:06:51.902 LIB libspdk_event_ublk.a 00:06:51.902 LIB libspdk_event_scsi.a 00:06:51.902 LIB libspdk_event_nbd.a 00:06:51.902 SO libspdk_event_ublk.so.3.0 00:06:52.161 SO libspdk_event_scsi.so.6.0 00:06:52.161 SO libspdk_event_nbd.so.6.0 00:06:52.161 SYMLINK libspdk_event_ublk.so 00:06:52.161 SYMLINK libspdk_event_scsi.so 00:06:52.161 LIB libspdk_event_nvmf.a 00:06:52.161 SYMLINK libspdk_event_nbd.so 00:06:52.161 SO libspdk_event_nvmf.so.6.0 00:06:52.161 SYMLINK libspdk_event_nvmf.so 00:06:52.419 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:52.419 CC module/event/subsystems/iscsi/iscsi.o 00:06:52.419 LIB libspdk_event_vhost_scsi.a 00:06:52.419 LIB libspdk_event_iscsi.a 00:06:52.419 SO libspdk_event_vhost_scsi.so.3.0 00:06:52.419 SO 
libspdk_event_iscsi.so.6.0 00:06:52.678 SYMLINK libspdk_event_vhost_scsi.so 00:06:52.678 SYMLINK libspdk_event_iscsi.so 00:06:52.678 SO libspdk.so.6.0 00:06:52.678 SYMLINK libspdk.so 00:06:52.937 CC app/spdk_nvme_perf/perf.o 00:06:52.937 CXX app/trace/trace.o 00:06:52.937 CC app/spdk_nvme_identify/identify.o 00:06:52.937 CC app/spdk_lspci/spdk_lspci.o 00:06:52.937 CC app/trace_record/trace_record.o 00:06:52.937 CC app/nvmf_tgt/nvmf_main.o 00:06:53.195 CC app/iscsi_tgt/iscsi_tgt.o 00:06:53.195 CC app/spdk_tgt/spdk_tgt.o 00:06:53.195 CC test/thread/poller_perf/poller_perf.o 00:06:53.195 CC examples/util/zipf/zipf.o 00:06:53.195 LINK spdk_lspci 00:06:53.195 LINK nvmf_tgt 00:06:53.454 LINK poller_perf 00:06:53.454 LINK spdk_trace_record 00:06:53.454 LINK zipf 00:06:53.454 LINK spdk_tgt 00:06:53.454 LINK iscsi_tgt 00:06:53.454 LINK spdk_trace 00:06:53.729 CC app/spdk_nvme_discover/discovery_aer.o 00:06:53.729 CC app/spdk_top/spdk_top.o 00:06:53.729 CC app/spdk_dd/spdk_dd.o 00:06:53.729 CC examples/ioat/perf/perf.o 00:06:53.729 CC examples/ioat/verify/verify.o 00:06:53.988 LINK spdk_nvme_discover 00:06:53.988 CC test/dma/test_dma/test_dma.o 00:06:53.988 CC app/fio/nvme/fio_plugin.o 00:06:53.988 CC app/vhost/vhost.o 00:06:53.988 LINK verify 00:06:54.246 LINK spdk_nvme_perf 00:06:54.246 LINK vhost 00:06:54.246 LINK ioat_perf 00:06:54.246 LINK spdk_dd 00:06:54.246 LINK spdk_nvme_identify 00:06:54.505 CC examples/vmd/lsvmd/lsvmd.o 00:06:54.505 CC examples/vmd/led/led.o 00:06:54.505 LINK test_dma 00:06:54.505 CC test/app/bdev_svc/bdev_svc.o 00:06:54.505 LINK lsvmd 00:06:54.505 CC app/fio/bdev/fio_plugin.o 00:06:54.505 CC examples/idxd/perf/perf.o 00:06:54.505 LINK led 00:06:54.505 CC test/app/histogram_perf/histogram_perf.o 00:06:54.765 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:54.765 LINK spdk_nvme 00:06:54.765 LINK bdev_svc 00:06:54.765 LINK histogram_perf 00:06:54.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:54.765 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:06:55.114 TEST_HEADER include/spdk/accel.h 00:06:55.114 TEST_HEADER include/spdk/accel_module.h 00:06:55.114 TEST_HEADER include/spdk/assert.h 00:06:55.114 TEST_HEADER include/spdk/barrier.h 00:06:55.114 TEST_HEADER include/spdk/base64.h 00:06:55.114 TEST_HEADER include/spdk/bdev.h 00:06:55.114 TEST_HEADER include/spdk/bdev_module.h 00:06:55.114 TEST_HEADER include/spdk/bdev_zone.h 00:06:55.114 TEST_HEADER include/spdk/bit_array.h 00:06:55.114 TEST_HEADER include/spdk/bit_pool.h 00:06:55.114 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:55.114 TEST_HEADER include/spdk/blob_bdev.h 00:06:55.114 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:55.114 TEST_HEADER include/spdk/blobfs.h 00:06:55.114 TEST_HEADER include/spdk/blob.h 00:06:55.114 TEST_HEADER include/spdk/conf.h 00:06:55.114 TEST_HEADER include/spdk/config.h 00:06:55.114 TEST_HEADER include/spdk/cpuset.h 00:06:55.114 TEST_HEADER include/spdk/crc16.h 00:06:55.114 TEST_HEADER include/spdk/crc32.h 00:06:55.114 TEST_HEADER include/spdk/crc64.h 00:06:55.114 TEST_HEADER include/spdk/dif.h 00:06:55.114 TEST_HEADER include/spdk/dma.h 00:06:55.114 TEST_HEADER include/spdk/endian.h 00:06:55.114 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:55.114 TEST_HEADER include/spdk/env_dpdk.h 00:06:55.114 TEST_HEADER include/spdk/env.h 00:06:55.114 TEST_HEADER include/spdk/event.h 00:06:55.114 LINK spdk_top 00:06:55.114 TEST_HEADER include/spdk/fd_group.h 00:06:55.114 TEST_HEADER include/spdk/fd.h 00:06:55.114 TEST_HEADER include/spdk/file.h 00:06:55.114 TEST_HEADER include/spdk/fsdev.h 00:06:55.114 TEST_HEADER include/spdk/fsdev_module.h 00:06:55.114 TEST_HEADER include/spdk/ftl.h 00:06:55.114 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:55.114 TEST_HEADER include/spdk/gpt_spec.h 00:06:55.114 TEST_HEADER include/spdk/hexlify.h 00:06:55.114 TEST_HEADER include/spdk/histogram_data.h 00:06:55.114 TEST_HEADER include/spdk/idxd.h 00:06:55.114 TEST_HEADER include/spdk/idxd_spec.h 00:06:55.114 TEST_HEADER include/spdk/init.h 
00:06:55.114 TEST_HEADER include/spdk/ioat.h 00:06:55.114 TEST_HEADER include/spdk/ioat_spec.h 00:06:55.114 TEST_HEADER include/spdk/iscsi_spec.h 00:06:55.114 TEST_HEADER include/spdk/json.h 00:06:55.114 TEST_HEADER include/spdk/jsonrpc.h 00:06:55.114 TEST_HEADER include/spdk/keyring.h 00:06:55.114 TEST_HEADER include/spdk/keyring_module.h 00:06:55.114 TEST_HEADER include/spdk/likely.h 00:06:55.114 TEST_HEADER include/spdk/log.h 00:06:55.114 LINK idxd_perf 00:06:55.114 TEST_HEADER include/spdk/lvol.h 00:06:55.114 TEST_HEADER include/spdk/md5.h 00:06:55.114 TEST_HEADER include/spdk/memory.h 00:06:55.114 TEST_HEADER include/spdk/mmio.h 00:06:55.114 TEST_HEADER include/spdk/nbd.h 00:06:55.114 TEST_HEADER include/spdk/net.h 00:06:55.114 TEST_HEADER include/spdk/notify.h 00:06:55.114 TEST_HEADER include/spdk/nvme.h 00:06:55.114 TEST_HEADER include/spdk/nvme_intel.h 00:06:55.114 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:55.114 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:55.114 TEST_HEADER include/spdk/nvme_spec.h 00:06:55.114 TEST_HEADER include/spdk/nvme_zns.h 00:06:55.114 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:55.114 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:55.114 TEST_HEADER include/spdk/nvmf.h 00:06:55.114 TEST_HEADER include/spdk/nvmf_spec.h 00:06:55.114 TEST_HEADER include/spdk/nvmf_transport.h 00:06:55.114 TEST_HEADER include/spdk/opal.h 00:06:55.114 TEST_HEADER include/spdk/opal_spec.h 00:06:55.114 TEST_HEADER include/spdk/pci_ids.h 00:06:55.114 TEST_HEADER include/spdk/pipe.h 00:06:55.114 TEST_HEADER include/spdk/queue.h 00:06:55.114 TEST_HEADER include/spdk/reduce.h 00:06:55.114 TEST_HEADER include/spdk/rpc.h 00:06:55.114 TEST_HEADER include/spdk/scheduler.h 00:06:55.114 TEST_HEADER include/spdk/scsi.h 00:06:55.114 TEST_HEADER include/spdk/scsi_spec.h 00:06:55.114 TEST_HEADER include/spdk/sock.h 00:06:55.114 TEST_HEADER include/spdk/stdinc.h 00:06:55.114 TEST_HEADER include/spdk/string.h 00:06:55.114 TEST_HEADER include/spdk/thread.h 00:06:55.114 
TEST_HEADER include/spdk/trace.h 00:06:55.114 TEST_HEADER include/spdk/trace_parser.h 00:06:55.114 TEST_HEADER include/spdk/tree.h 00:06:55.114 TEST_HEADER include/spdk/ublk.h 00:06:55.114 TEST_HEADER include/spdk/util.h 00:06:55.114 TEST_HEADER include/spdk/uuid.h 00:06:55.114 TEST_HEADER include/spdk/version.h 00:06:55.114 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:55.114 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:55.114 TEST_HEADER include/spdk/vhost.h 00:06:55.114 TEST_HEADER include/spdk/vmd.h 00:06:55.114 CC test/app/jsoncat/jsoncat.o 00:06:55.114 TEST_HEADER include/spdk/xor.h 00:06:55.114 TEST_HEADER include/spdk/zipf.h 00:06:55.114 CXX test/cpp_headers/accel.o 00:06:55.114 CXX test/cpp_headers/accel_module.o 00:06:55.114 LINK interrupt_tgt 00:06:55.114 LINK spdk_bdev 00:06:55.114 LINK nvme_fuzz 00:06:55.114 CC test/env/mem_callbacks/mem_callbacks.o 00:06:55.389 CC test/app/stub/stub.o 00:06:55.389 LINK jsoncat 00:06:55.389 CXX test/cpp_headers/assert.o 00:06:55.389 CXX test/cpp_headers/barrier.o 00:06:55.389 LINK vhost_fuzz 00:06:55.389 LINK stub 00:06:55.389 CC test/rpc_client/rpc_client_test.o 00:06:55.389 CC test/event/event_perf/event_perf.o 00:06:55.389 CC test/nvme/aer/aer.o 00:06:55.648 CC test/nvme/reset/reset.o 00:06:55.648 CC examples/thread/thread/thread_ex.o 00:06:55.648 CXX test/cpp_headers/base64.o 00:06:55.648 LINK rpc_client_test 00:06:55.648 CC test/nvme/sgl/sgl.o 00:06:55.648 LINK event_perf 00:06:55.907 CXX test/cpp_headers/bdev.o 00:06:55.908 CC test/accel/dif/dif.o 00:06:55.908 LINK thread 00:06:55.908 LINK aer 00:06:55.908 LINK mem_callbacks 00:06:55.908 LINK reset 00:06:55.908 CXX test/cpp_headers/bdev_module.o 00:06:56.166 LINK sgl 00:06:56.166 CC test/event/reactor/reactor.o 00:06:56.166 CC test/blobfs/mkfs/mkfs.o 00:06:56.166 CC test/env/vtophys/vtophys.o 00:06:56.166 CC test/event/reactor_perf/reactor_perf.o 00:06:56.166 LINK reactor 00:06:56.166 CC examples/sock/hello_world/hello_sock.o 00:06:56.166 CXX 
test/cpp_headers/bdev_zone.o 00:06:56.425 CC test/nvme/e2edp/nvme_dp.o 00:06:56.425 LINK mkfs 00:06:56.425 LINK vtophys 00:06:56.425 CC test/lvol/esnap/esnap.o 00:06:56.425 LINK reactor_perf 00:06:56.425 CXX test/cpp_headers/bit_array.o 00:06:56.425 CC test/event/app_repeat/app_repeat.o 00:06:56.683 LINK hello_sock 00:06:56.683 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:56.684 LINK nvme_dp 00:06:56.684 CC test/event/scheduler/scheduler.o 00:06:56.684 CXX test/cpp_headers/bit_pool.o 00:06:56.684 LINK app_repeat 00:06:56.684 CC examples/accel/perf/accel_perf.o 00:06:56.684 LINK dif 00:06:56.684 LINK env_dpdk_post_init 00:06:56.943 CC test/nvme/overhead/overhead.o 00:06:56.943 CXX test/cpp_headers/blob_bdev.o 00:06:56.943 CC test/env/memory/memory_ut.o 00:06:56.943 LINK scheduler 00:06:56.943 LINK iscsi_fuzz 00:06:57.201 CC test/nvme/err_injection/err_injection.o 00:06:57.201 CXX test/cpp_headers/blobfs_bdev.o 00:06:57.201 CC test/env/pci/pci_ut.o 00:06:57.201 CC examples/blob/hello_world/hello_blob.o 00:06:57.201 LINK overhead 00:06:57.201 LINK err_injection 00:06:57.201 CXX test/cpp_headers/blobfs.o 00:06:57.459 LINK accel_perf 00:06:57.459 CC test/bdev/bdevio/bdevio.o 00:06:57.459 CC test/nvme/startup/startup.o 00:06:57.459 CC examples/nvme/hello_world/hello_world.o 00:06:57.459 LINK hello_blob 00:06:57.459 CXX test/cpp_headers/blob.o 00:06:57.459 CXX test/cpp_headers/conf.o 00:06:57.459 CC test/nvme/reserve/reserve.o 00:06:57.718 LINK pci_ut 00:06:57.718 LINK startup 00:06:57.718 LINK hello_world 00:06:57.718 CXX test/cpp_headers/config.o 00:06:57.718 CXX test/cpp_headers/cpuset.o 00:06:57.718 CC test/nvme/simple_copy/simple_copy.o 00:06:57.718 CC examples/blob/cli/blobcli.o 00:06:57.718 LINK reserve 00:06:57.977 LINK bdevio 00:06:57.977 CC test/nvme/connect_stress/connect_stress.o 00:06:57.977 CXX test/cpp_headers/crc16.o 00:06:57.977 CC test/nvme/boot_partition/boot_partition.o 00:06:57.977 CC examples/nvme/reconnect/reconnect.o 00:06:57.977 LINK 
simple_copy 00:06:58.235 CXX test/cpp_headers/crc32.o 00:06:58.235 LINK connect_stress 00:06:58.235 LINK boot_partition 00:06:58.235 CC test/nvme/compliance/nvme_compliance.o 00:06:58.235 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:58.235 CC test/nvme/fused_ordering/fused_ordering.o 00:06:58.235 CXX test/cpp_headers/crc64.o 00:06:58.493 LINK memory_ut 00:06:58.493 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:58.493 LINK blobcli 00:06:58.493 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:58.493 CXX test/cpp_headers/dif.o 00:06:58.493 LINK reconnect 00:06:58.493 LINK hello_fsdev 00:06:58.493 LINK fused_ordering 00:06:58.751 LINK nvme_compliance 00:06:58.751 CXX test/cpp_headers/dma.o 00:06:58.751 LINK doorbell_aers 00:06:58.751 CC examples/bdev/hello_world/hello_bdev.o 00:06:58.751 CC test/nvme/fdp/fdp.o 00:06:58.751 CC test/nvme/cuse/cuse.o 00:06:59.010 CC examples/nvme/arbitration/arbitration.o 00:06:59.010 CC examples/bdev/bdevperf/bdevperf.o 00:06:59.010 CXX test/cpp_headers/endian.o 00:06:59.010 CXX test/cpp_headers/env_dpdk.o 00:06:59.010 CC examples/nvme/hotplug/hotplug.o 00:06:59.010 LINK nvme_manage 00:06:59.268 LINK hello_bdev 00:06:59.268 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:59.268 CXX test/cpp_headers/env.o 00:06:59.268 LINK hotplug 00:06:59.268 LINK fdp 00:06:59.268 LINK arbitration 00:06:59.268 CC examples/nvme/abort/abort.o 00:06:59.527 CXX test/cpp_headers/event.o 00:06:59.527 CXX test/cpp_headers/fd_group.o 00:06:59.527 LINK cmb_copy 00:06:59.527 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:59.527 CXX test/cpp_headers/fd.o 00:06:59.527 CXX test/cpp_headers/file.o 00:06:59.527 CXX test/cpp_headers/fsdev.o 00:06:59.527 CXX test/cpp_headers/fsdev_module.o 00:06:59.527 CXX test/cpp_headers/ftl.o 00:06:59.787 LINK pmr_persistence 00:06:59.787 CXX test/cpp_headers/fuse_dispatcher.o 00:06:59.787 CXX test/cpp_headers/gpt_spec.o 00:06:59.787 CXX test/cpp_headers/hexlify.o 00:06:59.787 LINK abort 00:06:59.787 CXX 
test/cpp_headers/histogram_data.o 00:06:59.787 CXX test/cpp_headers/idxd.o 00:06:59.787 CXX test/cpp_headers/idxd_spec.o 00:06:59.787 CXX test/cpp_headers/init.o 00:07:00.045 LINK bdevperf 00:07:00.045 CXX test/cpp_headers/ioat.o 00:07:00.045 CXX test/cpp_headers/ioat_spec.o 00:07:00.045 CXX test/cpp_headers/iscsi_spec.o 00:07:00.045 CXX test/cpp_headers/json.o 00:07:00.045 CXX test/cpp_headers/jsonrpc.o 00:07:00.045 CXX test/cpp_headers/keyring.o 00:07:00.045 CXX test/cpp_headers/keyring_module.o 00:07:00.304 CXX test/cpp_headers/likely.o 00:07:00.304 CXX test/cpp_headers/log.o 00:07:00.304 CXX test/cpp_headers/lvol.o 00:07:00.304 CXX test/cpp_headers/md5.o 00:07:00.304 CXX test/cpp_headers/memory.o 00:07:00.304 CXX test/cpp_headers/mmio.o 00:07:00.304 CXX test/cpp_headers/nbd.o 00:07:00.304 CXX test/cpp_headers/net.o 00:07:00.304 CC examples/nvmf/nvmf/nvmf.o 00:07:00.304 CXX test/cpp_headers/notify.o 00:07:00.304 CXX test/cpp_headers/nvme.o 00:07:00.304 CXX test/cpp_headers/nvme_intel.o 00:07:00.304 CXX test/cpp_headers/nvme_ocssd.o 00:07:00.563 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:00.563 CXX test/cpp_headers/nvme_spec.o 00:07:00.563 CXX test/cpp_headers/nvme_zns.o 00:07:00.563 LINK cuse 00:07:00.563 CXX test/cpp_headers/nvmf_cmd.o 00:07:00.563 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:00.563 CXX test/cpp_headers/nvmf.o 00:07:00.563 CXX test/cpp_headers/nvmf_spec.o 00:07:00.563 CXX test/cpp_headers/nvmf_transport.o 00:07:00.563 CXX test/cpp_headers/opal.o 00:07:00.822 LINK nvmf 00:07:00.822 CXX test/cpp_headers/opal_spec.o 00:07:00.822 CXX test/cpp_headers/pci_ids.o 00:07:00.822 CXX test/cpp_headers/pipe.o 00:07:00.822 CXX test/cpp_headers/queue.o 00:07:00.822 CXX test/cpp_headers/reduce.o 00:07:00.822 CXX test/cpp_headers/rpc.o 00:07:00.822 CXX test/cpp_headers/scheduler.o 00:07:00.822 CXX test/cpp_headers/scsi.o 00:07:00.822 CXX test/cpp_headers/scsi_spec.o 00:07:00.822 CXX test/cpp_headers/sock.o 00:07:00.822 CXX test/cpp_headers/stdinc.o 00:07:00.822 
CXX test/cpp_headers/string.o 00:07:01.081 CXX test/cpp_headers/thread.o 00:07:01.081 CXX test/cpp_headers/trace.o 00:07:01.081 CXX test/cpp_headers/trace_parser.o 00:07:01.081 CXX test/cpp_headers/tree.o 00:07:01.081 CXX test/cpp_headers/ublk.o 00:07:01.081 CXX test/cpp_headers/util.o 00:07:01.081 CXX test/cpp_headers/uuid.o 00:07:01.081 CXX test/cpp_headers/version.o 00:07:01.081 CXX test/cpp_headers/vfio_user_pci.o 00:07:01.081 CXX test/cpp_headers/vfio_user_spec.o 00:07:01.081 CXX test/cpp_headers/vhost.o 00:07:01.081 CXX test/cpp_headers/vmd.o 00:07:01.339 CXX test/cpp_headers/xor.o 00:07:01.339 CXX test/cpp_headers/zipf.o 00:07:03.874 LINK esnap 00:07:04.133 00:07:04.133 real 1m39.661s 00:07:04.133 user 9m18.761s 00:07:04.133 sys 1m45.681s 00:07:04.133 11:19:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:04.133 ************************************ 00:07:04.133 END TEST make 00:07:04.133 ************************************ 00:07:04.133 11:19:11 make -- common/autotest_common.sh@10 -- $ set +x 00:07:04.133 11:19:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:04.133 11:19:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:04.133 11:19:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:04.133 11:19:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.133 11:19:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:04.133 11:19:11 -- pm/common@44 -- $ pid=5249 00:07:04.133 11:19:11 -- pm/common@50 -- $ kill -TERM 5249 00:07:04.133 11:19:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.133 11:19:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:04.133 11:19:11 -- pm/common@44 -- $ pid=5251 00:07:04.134 11:19:11 -- pm/common@50 -- $ kill -TERM 5251 00:07:04.134 11:19:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 
00:07:04.134 11:19:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:04.393 11:19:12 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.393 11:19:12 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.393 11:19:12 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.393 11:19:12 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.393 11:19:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.393 11:19:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.393 11:19:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.393 11:19:12 -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.393 11:19:12 -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.393 11:19:12 -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.393 11:19:12 -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.393 11:19:12 -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.393 11:19:12 -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.393 11:19:12 -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.393 11:19:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.393 11:19:12 -- scripts/common.sh@344 -- # case "$op" in 00:07:04.393 11:19:12 -- scripts/common.sh@345 -- # : 1 00:07:04.393 11:19:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.393 11:19:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.393 11:19:12 -- scripts/common.sh@365 -- # decimal 1 00:07:04.393 11:19:12 -- scripts/common.sh@353 -- # local d=1 00:07:04.393 11:19:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.393 11:19:12 -- scripts/common.sh@355 -- # echo 1 00:07:04.393 11:19:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.393 11:19:12 -- scripts/common.sh@366 -- # decimal 2 00:07:04.393 11:19:12 -- scripts/common.sh@353 -- # local d=2 00:07:04.393 11:19:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.393 11:19:12 -- scripts/common.sh@355 -- # echo 2 00:07:04.394 11:19:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.394 11:19:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.394 11:19:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.394 11:19:12 -- scripts/common.sh@368 -- # return 0 00:07:04.394 11:19:12 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.394 11:19:12 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.394 --rc genhtml_branch_coverage=1 00:07:04.394 --rc genhtml_function_coverage=1 00:07:04.394 --rc genhtml_legend=1 00:07:04.394 --rc geninfo_all_blocks=1 00:07:04.394 --rc geninfo_unexecuted_blocks=1 00:07:04.394 00:07:04.394 ' 00:07:04.394 11:19:12 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.394 --rc genhtml_branch_coverage=1 00:07:04.394 --rc genhtml_function_coverage=1 00:07:04.394 --rc genhtml_legend=1 00:07:04.394 --rc geninfo_all_blocks=1 00:07:04.394 --rc geninfo_unexecuted_blocks=1 00:07:04.394 00:07:04.394 ' 00:07:04.394 11:19:12 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.394 --rc genhtml_branch_coverage=1 00:07:04.394 --rc 
genhtml_function_coverage=1 00:07:04.394 --rc genhtml_legend=1 00:07:04.394 --rc geninfo_all_blocks=1 00:07:04.394 --rc geninfo_unexecuted_blocks=1 00:07:04.394 00:07:04.394 ' 00:07:04.394 11:19:12 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.394 --rc genhtml_branch_coverage=1 00:07:04.394 --rc genhtml_function_coverage=1 00:07:04.394 --rc genhtml_legend=1 00:07:04.394 --rc geninfo_all_blocks=1 00:07:04.394 --rc geninfo_unexecuted_blocks=1 00:07:04.394 00:07:04.394 ' 00:07:04.394 11:19:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.394 11:19:12 -- nvmf/common.sh@7 -- # uname -s 00:07:04.394 11:19:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.394 11:19:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.394 11:19:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.394 11:19:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.394 11:19:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.394 11:19:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.394 11:19:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.394 11:19:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.394 11:19:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.394 11:19:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.394 11:19:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efe9994e-228a-4e9b-98c1-203c146486a7 00:07:04.394 11:19:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=efe9994e-228a-4e9b-98c1-203c146486a7 00:07:04.394 11:19:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.394 11:19:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.394 11:19:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:04.394 11:19:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:04.394 11:19:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.394 11:19:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.394 11:19:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.394 11:19:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.394 11:19:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.394 11:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.394 11:19:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.394 11:19:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.394 11:19:12 -- paths/export.sh@5 -- # export PATH 00:07:04.394 11:19:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.394 11:19:12 -- nvmf/common.sh@51 -- # : 0 00:07:04.394 11:19:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.394 11:19:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.394 11:19:12 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:04.394 11:19:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.394 11:19:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.394 11:19:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:04.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.394 11:19:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.394 11:19:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.394 11:19:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.394 11:19:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:04.394 11:19:12 -- spdk/autotest.sh@32 -- # uname -s 00:07:04.394 11:19:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:04.394 11:19:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:04.394 11:19:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:04.394 11:19:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:04.394 11:19:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:04.394 11:19:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:04.653 11:19:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:04.653 11:19:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:04.653 11:19:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54331 00:07:04.653 11:19:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:04.653 11:19:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:04.653 11:19:12 -- pm/common@17 -- # local monitor 00:07:04.653 11:19:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.653 11:19:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:04.653 11:19:12 -- pm/common@25 -- # sleep 1 00:07:04.653 11:19:12 -- pm/common@21 -- # date +%s 00:07:04.653 11:19:12 -- 
pm/common@21 -- # date +%s 00:07:04.653 11:19:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101552 00:07:04.653 11:19:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101552 00:07:04.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101552_collect-cpu-load.pm.log 00:07:04.653 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101552_collect-vmstat.pm.log 00:07:05.588 11:19:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:05.588 11:19:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:05.588 11:19:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.588 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.588 11:19:13 -- spdk/autotest.sh@59 -- # create_test_list 00:07:05.589 11:19:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:05.589 11:19:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.589 11:19:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:05.589 11:19:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:05.589 11:19:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:05.589 11:19:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:05.589 11:19:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:05.589 11:19:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:05.589 11:19:13 -- common/autotest_common.sh@1457 -- # uname 00:07:05.589 11:19:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:05.589 11:19:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:05.589 11:19:13 -- common/autotest_common.sh@1477 -- 
# uname 00:07:05.589 11:19:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:05.589 11:19:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:05.589 11:19:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:05.589 lcov: LCOV version 1.15 00:07:05.589 11:19:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:23.667 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:23.667 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:41.843 11:19:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:41.843 11:19:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.843 11:19:47 -- common/autotest_common.sh@10 -- # set +x 00:07:41.843 11:19:47 -- spdk/autotest.sh@78 -- # rm -f 00:07:41.843 11:19:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:41.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.843 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:41.843 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:41.843 11:19:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:41.843 11:19:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:41.843 11:19:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:41.843 11:19:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:41.843 
11:19:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.843 11:19:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:41.843 11:19:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:41.843 11:19:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.843 11:19:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:41.843 11:19:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:41.843 11:19:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.843 11:19:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:07:41.843 11:19:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:41.843 11:19:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:41.843 11:19:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:07:41.843 11:19:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:41.843 11:19:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:41.843 11:19:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:41.844 11:19:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:41.844 11:19:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:41.844 11:19:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:41.844 11:19:48 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:07:41.844 11:19:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:41.844 11:19:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:41.844 No valid GPT data, bailing 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # pt= 00:07:41.844 11:19:48 -- scripts/common.sh@395 -- # return 1 00:07:41.844 11:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:41.844 1+0 records in 00:07:41.844 1+0 records out 00:07:41.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403792 s, 260 MB/s 00:07:41.844 11:19:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:41.844 11:19:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:41.844 11:19:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:41.844 11:19:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:41.844 11:19:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:41.844 No valid GPT data, bailing 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # pt= 00:07:41.844 11:19:48 -- scripts/common.sh@395 -- # return 1 00:07:41.844 11:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:41.844 1+0 records in 00:07:41.844 1+0 records out 00:07:41.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381347 s, 275 MB/s 00:07:41.844 11:19:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:41.844 11:19:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:41.844 11:19:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:41.844 11:19:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:41.844 11:19:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:07:41.844 No valid GPT data, bailing 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # pt= 00:07:41.844 11:19:48 -- scripts/common.sh@395 -- # return 1 00:07:41.844 11:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:41.844 1+0 records in 00:07:41.844 1+0 records out 00:07:41.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463309 s, 226 MB/s 00:07:41.844 11:19:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:41.844 11:19:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:41.844 11:19:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:41.844 11:19:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:41.844 11:19:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:41.844 No valid GPT data, bailing 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:41.844 11:19:48 -- scripts/common.sh@394 -- # pt= 00:07:41.844 11:19:48 -- scripts/common.sh@395 -- # return 1 00:07:41.844 11:19:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:41.844 1+0 records in 00:07:41.844 1+0 records out 00:07:41.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444922 s, 236 MB/s 00:07:41.844 11:19:48 -- spdk/autotest.sh@105 -- # sync 00:07:41.844 11:19:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:41.844 11:19:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:41.844 11:19:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:42.781 11:19:50 -- spdk/autotest.sh@111 -- # uname -s 00:07:42.781 11:19:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:42.781 11:19:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:42.781 11:19:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
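The per-namespace cleanup traced above (autotest.sh@97–101) repeats one pattern for each non-partition NVMe block device: probe for a GPT with `spdk-gpt.py`, fall back to `blkid -s PTTYPE`, and if neither finds a partition table, zero the first MiB with `dd`. A simplified sketch of that loop, assuming the repo layout from this log (this is a reading of the trace, not the exact `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the wipe loop traced above; SPDK_DIR matches this log's layout.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

shopt -s extglob  # required for the n!(*p*) glob used by autotest.sh

block_in_use() {
    local block=$1 pt
    # A valid GPT means the device is in use.
    "$SPDK_DIR/scripts/spdk-gpt.py" "$block" && return 0
    # Otherwise any partition-table type reported by blkid counts too.
    pt=$(blkid -s PTTYPE -o value "$block")
    [[ -n $pt ]] && return 0
    return 1
}

# nvme*n!(*p*) matches whole namespaces (nvme0n1) but not partitions (nvme0n1p1).
for dev in /dev/nvme*n!(*p*); do
    [[ -b $dev ]] || continue  # skip when the glob matched nothing real
    if ! block_in_use "$dev"; then
        # No partition table found: zero the first MiB so stale metadata
        # cannot confuse later tests (the dd lines in the trace above).
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
```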
00:07:43.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.349 Hugepages 00:07:43.349 node hugesize free / total 00:07:43.349 node0 1048576kB 0 / 0 00:07:43.349 node0 2048kB 0 / 0 00:07:43.349 00:07:43.349 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:43.349 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:43.607 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:43.607 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:43.607 11:19:51 -- spdk/autotest.sh@117 -- # uname -s 00:07:43.607 11:19:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:43.607 11:19:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:43.607 11:19:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:44.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:44.431 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.431 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:44.431 11:19:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:45.366 11:19:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:45.366 11:19:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:45.366 11:19:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:45.366 11:19:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:45.366 11:19:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:45.366 11:19:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:45.366 11:19:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:45.366 11:19:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:45.366 11:19:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:45.625 11:19:53 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:45.625 11:19:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:45.625 11:19:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:45.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.884 Waiting for block devices as requested 00:07:45.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:45.884 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.142 11:19:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:46.142 11:19:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:46.142 11:19:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:46.142 11:19:53 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:46.142 11:19:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1543 -- # continue 00:07:46.142 11:19:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:46.142 11:19:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:46.142 11:19:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:46.142 11:19:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:46.142 11:19:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:46.142 11:19:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:46.142 11:19:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:46.142 11:19:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:46.142 11:19:53 -- common/autotest_common.sh@1543 -- # continue 00:07:46.142 11:19:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:46.142 11:19:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.142 11:19:53 -- common/autotest_common.sh@10 -- # set +x 00:07:46.142 11:19:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:46.142 11:19:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.142 11:19:53 -- common/autotest_common.sh@10 -- # set +x 00:07:46.142 11:19:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.967 11:19:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:46.967 11:19:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.967 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.967 11:19:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:46.967 11:19:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:46.967 11:19:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:46.967 11:19:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:46.967 11:19:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:46.967 11:19:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:46.967 11:19:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:46.967 11:19:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:46.967 
11:19:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:46.967 11:19:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:46.967 11:19:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.226 11:19:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.226 11:19:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:47.226 11:19:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:47.226 11:19:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:47.226 11:19:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:47.226 11:19:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:47.226 11:19:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:47.226 11:19:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:47.226 11:19:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:47.226 11:19:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:47.226 11:19:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:47.226 11:19:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:47.226 11:19:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:47.226 11:19:54 -- common/autotest_common.sh@1572 -- # return 0 00:07:47.226 11:19:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:47.226 11:19:54 -- common/autotest_common.sh@1580 -- # return 0 00:07:47.226 11:19:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:47.226 11:19:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:47.226 11:19:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:47.226 11:19:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:47.226 11:19:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:47.226 11:19:54 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.226 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:47.226 11:19:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:47.226 11:19:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:47.226 11:19:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.226 11:19:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.226 11:19:54 -- common/autotest_common.sh@10 -- # set +x 00:07:47.226 ************************************ 00:07:47.226 START TEST env 00:07:47.226 ************************************ 00:07:47.226 11:19:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:47.226 * Looking for test storage... 00:07:47.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:47.226 11:19:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.226 11:19:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.226 11:19:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.226 11:19:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.226 11:19:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.226 11:19:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.226 11:19:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.226 11:19:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.226 11:19:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.226 11:19:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.226 11:19:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.226 11:19:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.226 11:19:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.226 11:19:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.226 11:19:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.226 11:19:55 env -- 
scripts/common.sh@344 -- # case "$op" in 00:07:47.226 11:19:55 env -- scripts/common.sh@345 -- # : 1 00:07:47.226 11:19:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.226 11:19:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.226 11:19:55 env -- scripts/common.sh@365 -- # decimal 1 00:07:47.226 11:19:55 env -- scripts/common.sh@353 -- # local d=1 00:07:47.226 11:19:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.226 11:19:55 env -- scripts/common.sh@355 -- # echo 1 00:07:47.226 11:19:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.226 11:19:55 env -- scripts/common.sh@366 -- # decimal 2 00:07:47.226 11:19:55 env -- scripts/common.sh@353 -- # local d=2 00:07:47.226 11:19:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.484 11:19:55 env -- scripts/common.sh@355 -- # echo 2 00:07:47.484 11:19:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.484 11:19:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.484 11:19:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.484 11:19:55 env -- scripts/common.sh@368 -- # return 0 00:07:47.484 11:19:55 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.484 11:19:55 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.484 --rc genhtml_branch_coverage=1 00:07:47.484 --rc genhtml_function_coverage=1 00:07:47.484 --rc genhtml_legend=1 00:07:47.484 --rc geninfo_all_blocks=1 00:07:47.484 --rc geninfo_unexecuted_blocks=1 00:07:47.484 00:07:47.484 ' 00:07:47.484 11:19:55 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.484 --rc genhtml_branch_coverage=1 00:07:47.484 --rc genhtml_function_coverage=1 00:07:47.484 --rc genhtml_legend=1 00:07:47.484 --rc 
geninfo_all_blocks=1 00:07:47.484 --rc geninfo_unexecuted_blocks=1 00:07:47.484 00:07:47.484 ' 00:07:47.484 11:19:55 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.484 --rc genhtml_branch_coverage=1 00:07:47.484 --rc genhtml_function_coverage=1 00:07:47.484 --rc genhtml_legend=1 00:07:47.484 --rc geninfo_all_blocks=1 00:07:47.484 --rc geninfo_unexecuted_blocks=1 00:07:47.484 00:07:47.484 ' 00:07:47.485 11:19:55 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.485 --rc genhtml_branch_coverage=1 00:07:47.485 --rc genhtml_function_coverage=1 00:07:47.485 --rc genhtml_legend=1 00:07:47.485 --rc geninfo_all_blocks=1 00:07:47.485 --rc geninfo_unexecuted_blocks=1 00:07:47.485 00:07:47.485 ' 00:07:47.485 11:19:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:47.485 11:19:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.485 11:19:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.485 11:19:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:47.485 ************************************ 00:07:47.485 START TEST env_memory 00:07:47.485 ************************************ 00:07:47.485 11:19:55 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:47.485 00:07:47.485 00:07:47.485 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.485 http://cunit.sourceforge.net/ 00:07:47.485 00:07:47.485 00:07:47.485 Suite: memory 00:07:47.485 Test: alloc and free memory map ...[2024-11-20 11:19:55.141029] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:47.485 passed 00:07:47.485 Test: mem map translation ...[2024-11-20 11:19:55.188490] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:47.485 [2024-11-20 11:19:55.188551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:47.485 [2024-11-20 11:19:55.188635] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:47.485 [2024-11-20 11:19:55.188681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:47.485 passed 00:07:47.485 Test: mem map registration ...[2024-11-20 11:19:55.267701] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:47.485 [2024-11-20 11:19:55.267774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:47.485 passed 00:07:47.743 Test: mem map adjacent registrations ...passed 00:07:47.743 00:07:47.743 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.743 suites 1 1 n/a 0 0 00:07:47.743 tests 4 4 4 0 0 00:07:47.743 asserts 152 152 152 0 n/a 00:07:47.743 00:07:47.743 Elapsed time = 0.272 seconds 00:07:47.743 00:07:47.743 real 0m0.310s 00:07:47.743 user 0m0.277s 00:07:47.743 sys 0m0.025s 00:07:47.743 11:19:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.743 11:19:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:47.743 ************************************ 00:07:47.743 END TEST env_memory 00:07:47.743 ************************************ 00:07:47.743 11:19:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:47.743 
11:19:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.743 11:19:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.743 11:19:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:47.743 ************************************ 00:07:47.743 START TEST env_vtophys 00:07:47.743 ************************************ 00:07:47.743 11:19:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:47.743 EAL: lib.eal log level changed from notice to debug 00:07:47.743 EAL: Detected lcore 0 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 1 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 2 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 3 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 4 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 5 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 6 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 7 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 8 as core 0 on socket 0 00:07:47.743 EAL: Detected lcore 9 as core 0 on socket 0 00:07:47.743 EAL: Maximum logical cores by configuration: 128 00:07:47.743 EAL: Detected CPU lcores: 10 00:07:47.743 EAL: Detected NUMA nodes: 1 00:07:47.743 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:47.743 EAL: Detected shared linkage of DPDK 00:07:47.743 EAL: No shared files mode enabled, IPC will be disabled 00:07:47.743 EAL: Selected IOVA mode 'PA' 00:07:47.743 EAL: Probing VFIO support... 00:07:47.743 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:47.743 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:47.743 EAL: Ask a virtual area of 0x2e000 bytes 00:07:47.743 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:47.743 EAL: Setting up physically contiguous memory... 
00:07:47.743 EAL: Setting maximum number of open files to 524288 00:07:47.743 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:47.743 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:47.743 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.743 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:47.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.743 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.743 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:47.743 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:47.743 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.743 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:47.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.743 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.743 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:47.743 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:47.743 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.743 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:47.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.743 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.743 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:47.743 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:47.743 EAL: Ask a virtual area of 0x61000 bytes 00:07:47.743 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:47.743 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:47.743 EAL: Ask a virtual area of 0x400000000 bytes 00:07:47.743 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:47.743 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:47.743 EAL: Hugepages will be freed exactly as allocated. 
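Each of the four memseg lists set up above reserves 0x400000000 bytes of virtual address space, which is exactly n_segs × hugepage_sz from the same log lines (8192 segments × 2 MiB pages). A quick arithmetic check of that reservation size:

```shell
# Verify the 0x400000000-byte VA reservation per memseg list above
# equals n_segs * hugepage_sz = 8192 * 2 MiB (values from the EAL log).
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))
reservation=$(( n_segs * hugepage_sz ))
printf '0x%x\n' "$reservation"   # 0x400000000
```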
00:07:47.744 EAL: No shared files mode enabled, IPC is disabled 00:07:47.744 EAL: No shared files mode enabled, IPC is disabled 00:07:48.002 EAL: TSC frequency is ~2200000 KHz 00:07:48.002 EAL: Main lcore 0 is ready (tid=7f4ccec9aa40;cpuset=[0]) 00:07:48.002 EAL: Trying to obtain current memory policy. 00:07:48.002 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.002 EAL: Restoring previous memory policy: 0 00:07:48.002 EAL: request: mp_malloc_sync 00:07:48.002 EAL: No shared files mode enabled, IPC is disabled 00:07:48.002 EAL: Heap on socket 0 was expanded by 2MB 00:07:48.002 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:48.002 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:48.002 EAL: Mem event callback 'spdk:(nil)' registered 00:07:48.002 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:48.002 00:07:48.002 00:07:48.002 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.002 http://cunit.sourceforge.net/ 00:07:48.002 00:07:48.002 00:07:48.002 Suite: components_suite 00:07:48.569 Test: vtophys_malloc_test ...passed 00:07:48.570 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 4MB 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was shrunk by 4MB 00:07:48.570 EAL: Trying to obtain current memory policy. 
00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 6MB 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was shrunk by 6MB 00:07:48.570 EAL: Trying to obtain current memory policy. 00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 10MB 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was shrunk by 10MB 00:07:48.570 EAL: Trying to obtain current memory policy. 00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 18MB 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was shrunk by 18MB 00:07:48.570 EAL: Trying to obtain current memory policy. 
00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 34MB 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was shrunk by 34MB 00:07:48.570 EAL: Trying to obtain current memory policy. 00:07:48.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.570 EAL: Restoring previous memory policy: 4 00:07:48.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.570 EAL: request: mp_malloc_sync 00:07:48.570 EAL: No shared files mode enabled, IPC is disabled 00:07:48.570 EAL: Heap on socket 0 was expanded by 66MB 00:07:48.829 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.829 EAL: request: mp_malloc_sync 00:07:48.829 EAL: No shared files mode enabled, IPC is disabled 00:07:48.829 EAL: Heap on socket 0 was shrunk by 66MB 00:07:48.829 EAL: Trying to obtain current memory policy. 00:07:48.829 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.829 EAL: Restoring previous memory policy: 4 00:07:48.829 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.829 EAL: request: mp_malloc_sync 00:07:48.829 EAL: No shared files mode enabled, IPC is disabled 00:07:48.829 EAL: Heap on socket 0 was expanded by 130MB 00:07:49.087 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.087 EAL: request: mp_malloc_sync 00:07:49.087 EAL: No shared files mode enabled, IPC is disabled 00:07:49.087 EAL: Heap on socket 0 was shrunk by 130MB 00:07:49.346 EAL: Trying to obtain current memory policy. 
00:07:49.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.346 EAL: Restoring previous memory policy: 4 00:07:49.346 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.346 EAL: request: mp_malloc_sync 00:07:49.346 EAL: No shared files mode enabled, IPC is disabled 00:07:49.346 EAL: Heap on socket 0 was expanded by 258MB 00:07:49.913 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.913 EAL: request: mp_malloc_sync 00:07:49.913 EAL: No shared files mode enabled, IPC is disabled 00:07:49.913 EAL: Heap on socket 0 was shrunk by 258MB 00:07:50.172 EAL: Trying to obtain current memory policy. 00:07:50.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.431 EAL: Restoring previous memory policy: 4 00:07:50.431 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.431 EAL: request: mp_malloc_sync 00:07:50.431 EAL: No shared files mode enabled, IPC is disabled 00:07:50.431 EAL: Heap on socket 0 was expanded by 514MB 00:07:51.369 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.369 EAL: request: mp_malloc_sync 00:07:51.369 EAL: No shared files mode enabled, IPC is disabled 00:07:51.369 EAL: Heap on socket 0 was shrunk by 514MB 00:07:51.938 EAL: Trying to obtain current memory policy. 
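The expand/shrink iterations of vtophys_spdk_malloc_test traced through this stretch step the heap through a fixed ladder of sizes: 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB, i.e. 2^k + 2 MB for k = 1..10, with a matching shrink after each expansion. The ladder can be regenerated like so (this formula is an observation from the "expanded by" lines in this log, not taken from the test source):

```shell
# Regenerate the vtophys_spdk_malloc_test allocation ladder seen above:
# each step allocates (1 << k) + 2 MB for k = 1..10.
sizes=()
for k in $(seq 1 10); do
    sizes+=($(( (1 << k) + 2 )))
done
echo "${sizes[@]}"   # 4 6 10 18 34 66 130 258 514 1026
```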
00:07:51.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:52.198 EAL: Restoring previous memory policy: 4 00:07:52.198 EAL: Calling mem event callback 'spdk:(nil)' 00:07:52.198 EAL: request: mp_malloc_sync 00:07:52.198 EAL: No shared files mode enabled, IPC is disabled 00:07:52.198 EAL: Heap on socket 0 was expanded by 1026MB 00:07:54.114 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.114 EAL: request: mp_malloc_sync 00:07:54.114 EAL: No shared files mode enabled, IPC is disabled 00:07:54.114 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:56.027 passed 00:07:56.027 00:07:56.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.027 suites 1 1 n/a 0 0 00:07:56.027 tests 2 2 2 0 0 00:07:56.027 asserts 5740 5740 5740 0 n/a 00:07:56.027 00:07:56.027 Elapsed time = 7.626 seconds 00:07:56.027 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.027 EAL: request: mp_malloc_sync 00:07:56.027 EAL: No shared files mode enabled, IPC is disabled 00:07:56.027 EAL: Heap on socket 0 was shrunk by 2MB 00:07:56.027 EAL: No shared files mode enabled, IPC is disabled 00:07:56.027 EAL: No shared files mode enabled, IPC is disabled 00:07:56.027 EAL: No shared files mode enabled, IPC is disabled 00:07:56.027 00:07:56.027 real 0m7.966s 00:07:56.027 user 0m6.781s 00:07:56.027 sys 0m1.017s 00:07:56.027 11:20:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.027 11:20:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:56.027 ************************************ 00:07:56.027 END TEST env_vtophys 00:07:56.027 ************************************ 00:07:56.027 11:20:03 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.027 11:20:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.027 11:20:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.027 11:20:03 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.027 
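The env_vtophys run above exercises the allocator with a doubling sweep: every heap expansion reported by EAL is a power-of-two allocation plus a 2MB remainder (34MB, 66MB, 130MB, 258MB, 514MB, 1026MB). A minimal sketch reproducing that size sequence — illustrative only, not the SPDK test source:

```python
# Sizes observed in the EAL log above: each expansion is 2**k MiB plus a
# 2 MiB remainder, doubling from 32 MiB up to 1024 MiB.
def expansion_sizes_mib(start_exp=5, end_exp=10, remainder_mib=2):
    """Model the heap-expansion sweep seen in the env_vtophys log."""
    return [(1 << k) + remainder_mib for k in range(start_exp, end_exp + 1)]

print(expansion_sizes_mib())  # [34, 66, 130, 258, 514, 1026]
```

The matching "shrunk by" messages show each allocation being freed before the next, doubled size is tried.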
************************************ 00:07:56.027 START TEST env_pci 00:07:56.027 ************************************ 00:07:56.028 11:20:03 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.028 00:07:56.028 00:07:56.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.028 http://cunit.sourceforge.net/ 00:07:56.028 00:07:56.028 00:07:56.028 Suite: pci 00:07:56.028 Test: pci_hook ...[2024-11-20 11:20:03.484980] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56660 has claimed it 00:07:56.028 passed 00:07:56.028 00:07:56.028 EAL: Cannot find device (10000:00:01.0) 00:07:56.028 EAL: Failed to attach device on primary process 00:07:56.028 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.028 suites 1 1 n/a 0 0 00:07:56.028 tests 1 1 1 0 0 00:07:56.028 asserts 25 25 25 0 n/a 00:07:56.028 00:07:56.028 Elapsed time = 0.007 seconds 00:07:56.028 00:07:56.028 real 0m0.083s 00:07:56.028 user 0m0.047s 00:07:56.028 sys 0m0.035s 00:07:56.028 11:20:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.028 11:20:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:56.028 ************************************ 00:07:56.028 END TEST env_pci 00:07:56.028 ************************************ 00:07:56.028 11:20:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:56.028 11:20:03 env -- env/env.sh@15 -- # uname 00:07:56.028 11:20:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:56.028 11:20:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:56.028 11:20:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.028 11:20:03 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.028 11:20:03 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.028 11:20:03 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.028 ************************************ 00:07:56.028 START TEST env_dpdk_post_init 00:07:56.028 ************************************ 00:07:56.028 11:20:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.028 EAL: Detected CPU lcores: 10 00:07:56.028 EAL: Detected NUMA nodes: 1 00:07:56.028 EAL: Detected shared linkage of DPDK 00:07:56.028 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:56.028 EAL: Selected IOVA mode 'PA' 00:07:56.028 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:56.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:56.028 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:56.287 Starting DPDK initialization... 00:07:56.287 Starting SPDK post initialization... 00:07:56.287 SPDK NVMe probe 00:07:56.287 Attaching to 0000:00:10.0 00:07:56.287 Attaching to 0000:00:11.0 00:07:56.287 Attached to 0000:00:10.0 00:07:56.287 Attached to 0000:00:11.0 00:07:56.287 Cleaning up... 
00:07:56.287 00:07:56.287 real 0m0.310s 00:07:56.287 user 0m0.107s 00:07:56.287 sys 0m0.101s 00:07:56.287 11:20:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.287 11:20:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:56.287 ************************************ 00:07:56.287 END TEST env_dpdk_post_init 00:07:56.287 ************************************ 00:07:56.287 11:20:03 env -- env/env.sh@26 -- # uname 00:07:56.287 11:20:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:56.287 11:20:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:56.287 11:20:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.287 11:20:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.287 11:20:03 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.287 ************************************ 00:07:56.287 START TEST env_mem_callbacks 00:07:56.287 ************************************ 00:07:56.287 11:20:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:56.287 EAL: Detected CPU lcores: 10 00:07:56.287 EAL: Detected NUMA nodes: 1 00:07:56.287 EAL: Detected shared linkage of DPDK 00:07:56.287 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:56.287 EAL: Selected IOVA mode 'PA' 00:07:56.287 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:56.287 00:07:56.287 00:07:56.287 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.287 http://cunit.sourceforge.net/ 00:07:56.287 00:07:56.287 00:07:56.287 Suite: memory 00:07:56.287 Test: test ... 
00:07:56.287 register 0x200000200000 2097152 00:07:56.287 malloc 3145728 00:07:56.287 register 0x200000400000 4194304 00:07:56.546 buf 0x2000004fffc0 len 3145728 PASSED 00:07:56.546 malloc 64 00:07:56.546 buf 0x2000004ffec0 len 64 PASSED 00:07:56.546 malloc 4194304 00:07:56.546 register 0x200000800000 6291456 00:07:56.546 buf 0x2000009fffc0 len 4194304 PASSED 00:07:56.546 free 0x2000004fffc0 3145728 00:07:56.546 free 0x2000004ffec0 64 00:07:56.546 unregister 0x200000400000 4194304 PASSED 00:07:56.546 free 0x2000009fffc0 4194304 00:07:56.546 unregister 0x200000800000 6291456 PASSED 00:07:56.546 malloc 8388608 00:07:56.546 register 0x200000400000 10485760 00:07:56.546 buf 0x2000005fffc0 len 8388608 PASSED 00:07:56.546 free 0x2000005fffc0 8388608 00:07:56.546 unregister 0x200000400000 10485760 PASSED 00:07:56.546 passed 00:07:56.546 00:07:56.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.546 suites 1 1 n/a 0 0 00:07:56.546 tests 1 1 1 0 0 00:07:56.546 asserts 15 15 15 0 n/a 00:07:56.546 00:07:56.546 Elapsed time = 0.079 seconds 00:07:56.546 00:07:56.546 real 0m0.287s 00:07:56.546 user 0m0.110s 00:07:56.547 sys 0m0.076s 00:07:56.547 11:20:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.547 11:20:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:56.547 ************************************ 00:07:56.547 END TEST env_mem_callbacks 00:07:56.547 ************************************ 00:07:56.547 00:07:56.547 real 0m9.373s 00:07:56.547 user 0m7.511s 00:07:56.547 sys 0m1.479s 00:07:56.547 11:20:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.547 11:20:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:56.547 ************************************ 00:07:56.547 END TEST env 00:07:56.547 ************************************ 00:07:56.547 11:20:04 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:56.547 11:20:04 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.547 11:20:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.547 11:20:04 -- common/autotest_common.sh@10 -- # set +x 00:07:56.547 ************************************ 00:07:56.547 START TEST rpc 00:07:56.547 ************************************ 00:07:56.547 11:20:04 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:56.807 * Looking for test storage... 00:07:56.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.807 11:20:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.807 11:20:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.807 11:20:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.807 11:20:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.807 11:20:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.807 11:20:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:56.807 11:20:04 rpc -- scripts/common.sh@345 -- # : 1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.807 11:20:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.807 11:20:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@353 -- # local d=1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.807 11:20:04 rpc -- scripts/common.sh@355 -- # echo 1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.807 11:20:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@353 -- # local d=2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.807 11:20:04 rpc -- scripts/common.sh@355 -- # echo 2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.807 11:20:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.807 11:20:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.807 11:20:04 rpc -- scripts/common.sh@368 -- # return 0 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.807 --rc genhtml_branch_coverage=1 00:07:56.807 --rc genhtml_function_coverage=1 00:07:56.807 --rc genhtml_legend=1 00:07:56.807 --rc geninfo_all_blocks=1 00:07:56.807 --rc geninfo_unexecuted_blocks=1 00:07:56.807 00:07:56.807 ' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.807 --rc genhtml_branch_coverage=1 00:07:56.807 --rc genhtml_function_coverage=1 00:07:56.807 --rc genhtml_legend=1 00:07:56.807 --rc geninfo_all_blocks=1 00:07:56.807 --rc geninfo_unexecuted_blocks=1 00:07:56.807 00:07:56.807 ' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:56.807 --rc genhtml_branch_coverage=1 00:07:56.807 --rc genhtml_function_coverage=1 00:07:56.807 --rc genhtml_legend=1 00:07:56.807 --rc geninfo_all_blocks=1 00:07:56.807 --rc geninfo_unexecuted_blocks=1 00:07:56.807 00:07:56.807 ' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.807 --rc genhtml_branch_coverage=1 00:07:56.807 --rc genhtml_function_coverage=1 00:07:56.807 --rc genhtml_legend=1 00:07:56.807 --rc geninfo_all_blocks=1 00:07:56.807 --rc geninfo_unexecuted_blocks=1 00:07:56.807 00:07:56.807 ' 00:07:56.807 11:20:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56787 00:07:56.807 11:20:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:56.807 11:20:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56787 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 56787 ']' 00:07:56.807 11:20:04 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.807 11:20:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.066 [2024-11-20 11:20:04.653813] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:07:57.066 [2024-11-20 11:20:04.654006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56787 ] 00:07:57.066 [2024-11-20 11:20:04.889368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.325 [2024-11-20 11:20:05.050074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:57.325 [2024-11-20 11:20:05.050153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56787' to capture a snapshot of events at runtime. 00:07:57.325 [2024-11-20 11:20:05.050173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.325 [2024-11-20 11:20:05.050191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.325 [2024-11-20 11:20:05.050204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56787 for offline analysis/debug. 
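The target here was started with `-e bdev`, which selects the bdev tracepoint group; the trace_get_info output later in this log lists each group's bit (iscsi_conn 0x2, scsi 0x4, bdev 0x8, nvmf_rdma 0x10, ...). A hedged sketch of how such group names combine into a mask — the bit values are taken from this log, but the helper itself is illustrative, not SPDK's API:

```python
# Tracepoint group bits as reported by trace_get_info in this run
# (illustrative lookup table; SPDK defines these masks internally).
TPOINT_GROUP_MASKS = {
    "iscsi_conn": 0x2, "scsi": 0x4, "bdev": 0x8, "nvmf_rdma": 0x10,
    "nvmf_tcp": 0x20, "ftl": 0x40, "blobfs": 0x80,
}

def group_mask(*names):
    """OR together the selected groups' bits, as `spdk_tgt -e <groups>` would."""
    mask = 0
    for name in names:
        mask |= TPOINT_GROUP_MASKS[name]
    return mask

print(hex(group_mask("bdev")))  # matches the tpoint_group_mask "0x8" reported by trace_get_info
```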
00:07:57.325 [2024-11-20 11:20:05.051850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.305 11:20:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.305 11:20:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:58.305 11:20:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:58.305 11:20:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:58.305 11:20:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:58.305 11:20:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:58.305 11:20:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.305 11:20:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.305 11:20:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.305 ************************************ 00:07:58.305 START TEST rpc_integrity 00:07:58.305 ************************************ 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:58.305 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.305 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:58.305 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:58.305 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:58.305 11:20:06 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.305 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:58.564 { 00:07:58.564 "name": "Malloc0", 00:07:58.564 "aliases": [ 00:07:58.564 "1385ab04-bbb4-4d53-8ab0-2313d8fcd438" 00:07:58.564 ], 00:07:58.564 "product_name": "Malloc disk", 00:07:58.564 "block_size": 512, 00:07:58.564 "num_blocks": 16384, 00:07:58.564 "uuid": "1385ab04-bbb4-4d53-8ab0-2313d8fcd438", 00:07:58.564 "assigned_rate_limits": { 00:07:58.564 "rw_ios_per_sec": 0, 00:07:58.564 "rw_mbytes_per_sec": 0, 00:07:58.564 "r_mbytes_per_sec": 0, 00:07:58.564 "w_mbytes_per_sec": 0 00:07:58.564 }, 00:07:58.564 "claimed": false, 00:07:58.564 "zoned": false, 00:07:58.564 "supported_io_types": { 00:07:58.564 "read": true, 00:07:58.564 "write": true, 00:07:58.564 "unmap": true, 00:07:58.564 "flush": true, 00:07:58.564 "reset": true, 00:07:58.564 "nvme_admin": false, 00:07:58.564 "nvme_io": false, 00:07:58.564 "nvme_io_md": false, 00:07:58.564 "write_zeroes": true, 00:07:58.564 "zcopy": true, 00:07:58.564 "get_zone_info": false, 00:07:58.564 "zone_management": false, 00:07:58.564 "zone_append": false, 00:07:58.564 "compare": false, 00:07:58.564 "compare_and_write": false, 00:07:58.564 "abort": true, 00:07:58.564 "seek_hole": false, 
00:07:58.564 "seek_data": false, 00:07:58.564 "copy": true, 00:07:58.564 "nvme_iov_md": false 00:07:58.564 }, 00:07:58.564 "memory_domains": [ 00:07:58.564 { 00:07:58.564 "dma_device_id": "system", 00:07:58.564 "dma_device_type": 1 00:07:58.564 }, 00:07:58.564 { 00:07:58.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.564 "dma_device_type": 2 00:07:58.564 } 00:07:58.564 ], 00:07:58.564 "driver_specific": {} 00:07:58.564 } 00:07:58.564 ]' 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.564 [2024-11-20 11:20:06.252347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:58.564 [2024-11-20 11:20:06.252444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.564 [2024-11-20 11:20:06.252478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:58.564 [2024-11-20 11:20:06.252501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.564 [2024-11-20 11:20:06.255946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.564 [2024-11-20 11:20:06.256000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:58.564 Passthru0 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:58.564 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.564 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:58.564 { 00:07:58.564 "name": "Malloc0", 00:07:58.564 "aliases": [ 00:07:58.564 "1385ab04-bbb4-4d53-8ab0-2313d8fcd438" 00:07:58.564 ], 00:07:58.564 "product_name": "Malloc disk", 00:07:58.564 "block_size": 512, 00:07:58.564 "num_blocks": 16384, 00:07:58.564 "uuid": "1385ab04-bbb4-4d53-8ab0-2313d8fcd438", 00:07:58.564 "assigned_rate_limits": { 00:07:58.564 "rw_ios_per_sec": 0, 00:07:58.564 "rw_mbytes_per_sec": 0, 00:07:58.564 "r_mbytes_per_sec": 0, 00:07:58.564 "w_mbytes_per_sec": 0 00:07:58.564 }, 00:07:58.564 "claimed": true, 00:07:58.564 "claim_type": "exclusive_write", 00:07:58.564 "zoned": false, 00:07:58.564 "supported_io_types": { 00:07:58.564 "read": true, 00:07:58.564 "write": true, 00:07:58.564 "unmap": true, 00:07:58.565 "flush": true, 00:07:58.565 "reset": true, 00:07:58.565 "nvme_admin": false, 00:07:58.565 "nvme_io": false, 00:07:58.565 "nvme_io_md": false, 00:07:58.565 "write_zeroes": true, 00:07:58.565 "zcopy": true, 00:07:58.565 "get_zone_info": false, 00:07:58.565 "zone_management": false, 00:07:58.565 "zone_append": false, 00:07:58.565 "compare": false, 00:07:58.565 "compare_and_write": false, 00:07:58.565 "abort": true, 00:07:58.565 "seek_hole": false, 00:07:58.565 "seek_data": false, 00:07:58.565 "copy": true, 00:07:58.565 "nvme_iov_md": false 00:07:58.565 }, 00:07:58.565 "memory_domains": [ 00:07:58.565 { 00:07:58.565 "dma_device_id": "system", 00:07:58.565 "dma_device_type": 1 00:07:58.565 }, 00:07:58.565 { 00:07:58.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.565 "dma_device_type": 2 00:07:58.565 } 00:07:58.565 ], 00:07:58.565 "driver_specific": {} 00:07:58.565 }, 00:07:58.565 { 00:07:58.565 "name": "Passthru0", 00:07:58.565 "aliases": [ 00:07:58.565 "9fe3d88a-4721-5f15-8ab3-a8b7889cf80a" 00:07:58.565 ], 00:07:58.565 "product_name": "passthru", 00:07:58.565 
"block_size": 512, 00:07:58.565 "num_blocks": 16384, 00:07:58.565 "uuid": "9fe3d88a-4721-5f15-8ab3-a8b7889cf80a", 00:07:58.565 "assigned_rate_limits": { 00:07:58.565 "rw_ios_per_sec": 0, 00:07:58.565 "rw_mbytes_per_sec": 0, 00:07:58.565 "r_mbytes_per_sec": 0, 00:07:58.565 "w_mbytes_per_sec": 0 00:07:58.565 }, 00:07:58.565 "claimed": false, 00:07:58.565 "zoned": false, 00:07:58.565 "supported_io_types": { 00:07:58.565 "read": true, 00:07:58.565 "write": true, 00:07:58.565 "unmap": true, 00:07:58.565 "flush": true, 00:07:58.565 "reset": true, 00:07:58.565 "nvme_admin": false, 00:07:58.565 "nvme_io": false, 00:07:58.565 "nvme_io_md": false, 00:07:58.565 "write_zeroes": true, 00:07:58.565 "zcopy": true, 00:07:58.565 "get_zone_info": false, 00:07:58.565 "zone_management": false, 00:07:58.565 "zone_append": false, 00:07:58.565 "compare": false, 00:07:58.565 "compare_and_write": false, 00:07:58.565 "abort": true, 00:07:58.565 "seek_hole": false, 00:07:58.565 "seek_data": false, 00:07:58.565 "copy": true, 00:07:58.565 "nvme_iov_md": false 00:07:58.565 }, 00:07:58.565 "memory_domains": [ 00:07:58.565 { 00:07:58.565 "dma_device_id": "system", 00:07:58.565 "dma_device_type": 1 00:07:58.565 }, 00:07:58.565 { 00:07:58.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.565 "dma_device_type": 2 00:07:58.565 } 00:07:58.565 ], 00:07:58.565 "driver_specific": { 00:07:58.565 "passthru": { 00:07:58.565 "name": "Passthru0", 00:07:58.565 "base_bdev_name": "Malloc0" 00:07:58.565 } 00:07:58.565 } 00:07:58.565 } 00:07:58.565 ]' 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.565 11:20:06 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.565 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:58.565 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:58.824 ************************************ 00:07:58.824 END TEST rpc_integrity 00:07:58.824 ************************************ 00:07:58.824 11:20:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:58.824 00:07:58.824 real 0m0.370s 00:07:58.824 user 0m0.228s 00:07:58.824 sys 0m0.044s 00:07:58.824 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 11:20:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:58.824 11:20:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.824 11:20:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.824 11:20:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 ************************************ 00:07:58.824 START TEST rpc_plugins 00:07:58.824 ************************************ 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:58.824 { 00:07:58.824 "name": "Malloc1", 00:07:58.824 "aliases": [ 00:07:58.824 "1c16bad4-55d9-4b11-91d0-34020a7eb2b2" 00:07:58.824 ], 00:07:58.824 "product_name": "Malloc disk", 00:07:58.824 "block_size": 4096, 00:07:58.824 "num_blocks": 256, 00:07:58.824 "uuid": "1c16bad4-55d9-4b11-91d0-34020a7eb2b2", 00:07:58.824 "assigned_rate_limits": { 00:07:58.824 "rw_ios_per_sec": 0, 00:07:58.824 "rw_mbytes_per_sec": 0, 00:07:58.824 "r_mbytes_per_sec": 0, 00:07:58.824 "w_mbytes_per_sec": 0 00:07:58.824 }, 00:07:58.824 "claimed": false, 00:07:58.824 "zoned": false, 00:07:58.824 "supported_io_types": { 00:07:58.824 "read": true, 00:07:58.824 "write": true, 00:07:58.824 "unmap": true, 00:07:58.824 "flush": true, 00:07:58.824 "reset": true, 00:07:58.824 "nvme_admin": false, 00:07:58.824 "nvme_io": false, 00:07:58.824 "nvme_io_md": false, 00:07:58.824 "write_zeroes": true, 00:07:58.824 "zcopy": true, 00:07:58.824 "get_zone_info": false, 00:07:58.824 "zone_management": false, 00:07:58.824 "zone_append": false, 00:07:58.824 "compare": false, 00:07:58.824 "compare_and_write": false, 00:07:58.824 "abort": true, 00:07:58.824 "seek_hole": false, 00:07:58.824 "seek_data": false, 00:07:58.824 "copy": 
true, 00:07:58.824 "nvme_iov_md": false 00:07:58.824 }, 00:07:58.824 "memory_domains": [ 00:07:58.824 { 00:07:58.824 "dma_device_id": "system", 00:07:58.824 "dma_device_type": 1 00:07:58.824 }, 00:07:58.824 { 00:07:58.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.824 "dma_device_type": 2 00:07:58.824 } 00:07:58.824 ], 00:07:58.824 "driver_specific": {} 00:07:58.824 } 00:07:58.824 ]' 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:58.824 ************************************ 00:07:58.824 END TEST rpc_plugins 00:07:58.824 ************************************ 00:07:58.824 11:20:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:58.824 00:07:58.824 real 0m0.158s 00:07:58.824 user 0m0.098s 00:07:58.824 sys 0m0.020s 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.824 11:20:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 11:20:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:59.084 11:20:06 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.084 11:20:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.084 11:20:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 ************************************ 00:07:59.084 START TEST rpc_trace_cmd_test 00:07:59.084 ************************************ 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:59.084 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56787", 00:07:59.084 "tpoint_group_mask": "0x8", 00:07:59.084 "iscsi_conn": { 00:07:59.084 "mask": "0x2", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "scsi": { 00:07:59.084 "mask": "0x4", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "bdev": { 00:07:59.084 "mask": "0x8", 00:07:59.084 "tpoint_mask": "0xffffffffffffffff" 00:07:59.084 }, 00:07:59.084 "nvmf_rdma": { 00:07:59.084 "mask": "0x10", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "nvmf_tcp": { 00:07:59.084 "mask": "0x20", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "ftl": { 00:07:59.084 "mask": "0x40", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "blobfs": { 00:07:59.084 "mask": "0x80", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "dsa": { 00:07:59.084 "mask": "0x200", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "thread": { 00:07:59.084 "mask": "0x400", 00:07:59.084 
"tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "nvme_pcie": { 00:07:59.084 "mask": "0x800", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "iaa": { 00:07:59.084 "mask": "0x1000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "nvme_tcp": { 00:07:59.084 "mask": "0x2000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "bdev_nvme": { 00:07:59.084 "mask": "0x4000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "sock": { 00:07:59.084 "mask": "0x8000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "blob": { 00:07:59.084 "mask": "0x10000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "bdev_raid": { 00:07:59.084 "mask": "0x20000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 }, 00:07:59.084 "scheduler": { 00:07:59.084 "mask": "0x40000", 00:07:59.084 "tpoint_mask": "0x0" 00:07:59.084 } 00:07:59.084 }' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:59.084 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:59.343 ************************************ 00:07:59.343 END TEST rpc_trace_cmd_test 00:07:59.343 ************************************ 00:07:59.343 11:20:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:59.343 00:07:59.343 real 0m0.272s 00:07:59.343 user 
0m0.239s 00:07:59.343 sys 0m0.023s 00:07:59.343 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.343 11:20:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.343 11:20:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:59.343 11:20:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:59.343 11:20:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:59.343 11:20:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.343 11:20:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.343 11:20:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.343 ************************************ 00:07:59.343 START TEST rpc_daemon_integrity 00:07:59.343 ************************************ 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:59.343 { 00:07:59.343 "name": "Malloc2", 00:07:59.343 "aliases": [ 00:07:59.343 "d1185af4-00d5-473a-aab9-6c5be8d9f21a" 00:07:59.343 ], 00:07:59.343 "product_name": "Malloc disk", 00:07:59.343 "block_size": 512, 00:07:59.343 "num_blocks": 16384, 00:07:59.343 "uuid": "d1185af4-00d5-473a-aab9-6c5be8d9f21a", 00:07:59.343 "assigned_rate_limits": { 00:07:59.343 "rw_ios_per_sec": 0, 00:07:59.343 "rw_mbytes_per_sec": 0, 00:07:59.343 "r_mbytes_per_sec": 0, 00:07:59.343 "w_mbytes_per_sec": 0 00:07:59.343 }, 00:07:59.343 "claimed": false, 00:07:59.343 "zoned": false, 00:07:59.343 "supported_io_types": { 00:07:59.343 "read": true, 00:07:59.343 "write": true, 00:07:59.343 "unmap": true, 00:07:59.343 "flush": true, 00:07:59.343 "reset": true, 00:07:59.343 "nvme_admin": false, 00:07:59.343 "nvme_io": false, 00:07:59.343 "nvme_io_md": false, 00:07:59.343 "write_zeroes": true, 00:07:59.343 "zcopy": true, 00:07:59.343 "get_zone_info": false, 00:07:59.343 "zone_management": false, 00:07:59.343 "zone_append": false, 00:07:59.343 "compare": false, 00:07:59.343 "compare_and_write": false, 00:07:59.343 "abort": true, 00:07:59.343 "seek_hole": false, 00:07:59.343 "seek_data": false, 00:07:59.343 "copy": true, 00:07:59.343 "nvme_iov_md": false 00:07:59.343 }, 00:07:59.343 "memory_domains": [ 00:07:59.343 { 00:07:59.343 "dma_device_id": "system", 00:07:59.343 "dma_device_type": 1 00:07:59.343 }, 00:07:59.343 { 00:07:59.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.343 "dma_device_type": 2 00:07:59.343 } 
00:07:59.343 ], 00:07:59.343 "driver_specific": {} 00:07:59.343 } 00:07:59.343 ]' 00:07:59.343 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:59.344 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:59.344 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:59.344 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.344 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.344 [2024-11-20 11:20:07.186518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:59.344 [2024-11-20 11:20:07.186598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.344 [2024-11-20 11:20:07.186644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:59.344 [2024-11-20 11:20:07.186664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.602 [2024-11-20 11:20:07.189655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.602 [2024-11-20 11:20:07.189705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:59.602 Passthru0 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.602 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:59.602 { 00:07:59.602 "name": "Malloc2", 00:07:59.602 "aliases": [ 00:07:59.602 "d1185af4-00d5-473a-aab9-6c5be8d9f21a" 
00:07:59.602 ], 00:07:59.602 "product_name": "Malloc disk", 00:07:59.602 "block_size": 512, 00:07:59.602 "num_blocks": 16384, 00:07:59.602 "uuid": "d1185af4-00d5-473a-aab9-6c5be8d9f21a", 00:07:59.602 "assigned_rate_limits": { 00:07:59.602 "rw_ios_per_sec": 0, 00:07:59.602 "rw_mbytes_per_sec": 0, 00:07:59.602 "r_mbytes_per_sec": 0, 00:07:59.602 "w_mbytes_per_sec": 0 00:07:59.602 }, 00:07:59.602 "claimed": true, 00:07:59.602 "claim_type": "exclusive_write", 00:07:59.602 "zoned": false, 00:07:59.602 "supported_io_types": { 00:07:59.602 "read": true, 00:07:59.602 "write": true, 00:07:59.602 "unmap": true, 00:07:59.602 "flush": true, 00:07:59.602 "reset": true, 00:07:59.602 "nvme_admin": false, 00:07:59.602 "nvme_io": false, 00:07:59.602 "nvme_io_md": false, 00:07:59.602 "write_zeroes": true, 00:07:59.602 "zcopy": true, 00:07:59.602 "get_zone_info": false, 00:07:59.602 "zone_management": false, 00:07:59.602 "zone_append": false, 00:07:59.602 "compare": false, 00:07:59.602 "compare_and_write": false, 00:07:59.602 "abort": true, 00:07:59.602 "seek_hole": false, 00:07:59.602 "seek_data": false, 00:07:59.602 "copy": true, 00:07:59.602 "nvme_iov_md": false 00:07:59.602 }, 00:07:59.602 "memory_domains": [ 00:07:59.602 { 00:07:59.602 "dma_device_id": "system", 00:07:59.602 "dma_device_type": 1 00:07:59.602 }, 00:07:59.602 { 00:07:59.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.602 "dma_device_type": 2 00:07:59.602 } 00:07:59.602 ], 00:07:59.602 "driver_specific": {} 00:07:59.602 }, 00:07:59.602 { 00:07:59.602 "name": "Passthru0", 00:07:59.602 "aliases": [ 00:07:59.602 "8d5ad5fb-a3fc-51d6-b22a-714f45f6160b" 00:07:59.602 ], 00:07:59.602 "product_name": "passthru", 00:07:59.602 "block_size": 512, 00:07:59.602 "num_blocks": 16384, 00:07:59.602 "uuid": "8d5ad5fb-a3fc-51d6-b22a-714f45f6160b", 00:07:59.602 "assigned_rate_limits": { 00:07:59.602 "rw_ios_per_sec": 0, 00:07:59.602 "rw_mbytes_per_sec": 0, 00:07:59.602 "r_mbytes_per_sec": 0, 00:07:59.602 "w_mbytes_per_sec": 0 
00:07:59.602 }, 00:07:59.602 "claimed": false, 00:07:59.602 "zoned": false, 00:07:59.602 "supported_io_types": { 00:07:59.602 "read": true, 00:07:59.602 "write": true, 00:07:59.602 "unmap": true, 00:07:59.602 "flush": true, 00:07:59.602 "reset": true, 00:07:59.602 "nvme_admin": false, 00:07:59.602 "nvme_io": false, 00:07:59.602 "nvme_io_md": false, 00:07:59.602 "write_zeroes": true, 00:07:59.602 "zcopy": true, 00:07:59.602 "get_zone_info": false, 00:07:59.603 "zone_management": false, 00:07:59.603 "zone_append": false, 00:07:59.603 "compare": false, 00:07:59.603 "compare_and_write": false, 00:07:59.603 "abort": true, 00:07:59.603 "seek_hole": false, 00:07:59.603 "seek_data": false, 00:07:59.603 "copy": true, 00:07:59.603 "nvme_iov_md": false 00:07:59.603 }, 00:07:59.603 "memory_domains": [ 00:07:59.603 { 00:07:59.603 "dma_device_id": "system", 00:07:59.603 "dma_device_type": 1 00:07:59.603 }, 00:07:59.603 { 00:07:59.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.603 "dma_device_type": 2 00:07:59.603 } 00:07:59.603 ], 00:07:59.603 "driver_specific": { 00:07:59.603 "passthru": { 00:07:59.603 "name": "Passthru0", 00:07:59.603 "base_bdev_name": "Malloc2" 00:07:59.603 } 00:07:59.603 } 00:07:59.603 } 00:07:59.603 ]' 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:59.603 ************************************ 00:07:59.603 END TEST rpc_daemon_integrity 00:07:59.603 ************************************ 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:59.603 00:07:59.603 real 0m0.423s 00:07:59.603 user 0m0.285s 00:07:59.603 sys 0m0.044s 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.603 11:20:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:59.861 11:20:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:59.861 11:20:07 rpc -- rpc/rpc.sh@84 -- # killprocess 56787 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 56787 ']' 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@958 -- # kill -0 56787 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@959 -- # uname 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56787 00:07:59.862 killing process with pid 56787 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56787' 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@973 -- # kill 56787 00:07:59.862 11:20:07 rpc -- common/autotest_common.sh@978 -- # wait 56787 00:08:02.397 ************************************ 00:08:02.397 END TEST rpc 00:08:02.397 ************************************ 00:08:02.397 00:08:02.397 real 0m5.556s 00:08:02.397 user 0m6.365s 00:08:02.397 sys 0m0.875s 00:08:02.397 11:20:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.397 11:20:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.397 11:20:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.397 11:20:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.397 11:20:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.397 11:20:09 -- common/autotest_common.sh@10 -- # set +x 00:08:02.397 ************************************ 00:08:02.397 START TEST skip_rpc 00:08:02.397 ************************************ 00:08:02.397 11:20:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:02.397 * Looking for test storage... 
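The teardown traced above (`kill -0`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` guard, `echo 'killing process with pid …'`, `kill`, `wait`) can be sketched as a standalone helper. This is a hedged reconstruction, not the real `killprocess` from `common/autotest_common.sh`; it only mirrors the steps visible in this log:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown seen in the xtrace output above.
# Assumption: the real helper in common/autotest_common.sh may do more;
# this version reproduces only the checks the log shows.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # mirrors the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0  # process already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1  # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap it so the pid is truly gone
    return 0
}
```

Called with the target pid (here 57021, the `spdk_tgt` started with `--no-rpc-server`), the helper is a no-op if the process already exited, which is why the trap-driven cleanup path stays safe on both success and failure.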
00:08:02.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:02.397 11:20:09 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.397 11:20:10 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.397 11:20:10 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.397 11:20:10 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.397 11:20:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.398 11:20:10 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.398 --rc genhtml_branch_coverage=1 00:08:02.398 --rc genhtml_function_coverage=1 00:08:02.398 --rc genhtml_legend=1 00:08:02.398 --rc geninfo_all_blocks=1 00:08:02.398 --rc geninfo_unexecuted_blocks=1 00:08:02.398 00:08:02.398 ' 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.398 --rc genhtml_branch_coverage=1 00:08:02.398 --rc genhtml_function_coverage=1 00:08:02.398 --rc genhtml_legend=1 00:08:02.398 --rc geninfo_all_blocks=1 00:08:02.398 --rc geninfo_unexecuted_blocks=1 00:08:02.398 00:08:02.398 ' 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:02.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.398 --rc genhtml_branch_coverage=1 00:08:02.398 --rc genhtml_function_coverage=1 00:08:02.398 --rc genhtml_legend=1 00:08:02.398 --rc geninfo_all_blocks=1 00:08:02.398 --rc geninfo_unexecuted_blocks=1 00:08:02.398 00:08:02.398 ' 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.398 --rc genhtml_branch_coverage=1 00:08:02.398 --rc genhtml_function_coverage=1 00:08:02.398 --rc genhtml_legend=1 00:08:02.398 --rc geninfo_all_blocks=1 00:08:02.398 --rc geninfo_unexecuted_blocks=1 00:08:02.398 00:08:02.398 ' 00:08:02.398 11:20:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:02.398 11:20:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:02.398 11:20:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.398 11:20:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.398 ************************************ 00:08:02.398 START TEST skip_rpc 00:08:02.398 ************************************ 00:08:02.398 11:20:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:02.398 11:20:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57021 00:08:02.398 11:20:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:02.398 11:20:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:02.398 11:20:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:02.657 [2024-11-20 11:20:10.262270] Starting SPDK v25.01-pre 
git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:02.657 [2024-11-20 11:20:10.262669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57021 ] 00:08:02.657 [2024-11-20 11:20:10.453560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.916 [2024-11-20 11:20:10.621577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57021 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57021 ']' 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57021 00:08:08.187 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57021 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.188 killing process with pid 57021 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57021' 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57021 00:08:08.188 11:20:15 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57021 00:08:10.089 00:08:10.089 real 0m7.309s 00:08:10.089 user 0m6.735s 00:08:10.089 sys 0m0.466s 00:08:10.089 11:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.089 ************************************ 00:08:10.089 END TEST skip_rpc 00:08:10.089 ************************************ 00:08:10.089 11:20:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 11:20:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:10.089 11:20:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.089 11:20:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.089 11:20:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 
************************************ 00:08:10.089 START TEST skip_rpc_with_json 00:08:10.089 ************************************ 00:08:10.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57125 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57125 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57125 ']' 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.089 11:20:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.089 [2024-11-20 11:20:17.600212] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:10.089 [2024-11-20 11:20:17.600373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57125 ] 00:08:10.089 [2024-11-20 11:20:17.775836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.089 [2024-11-20 11:20:17.916817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.027 [2024-11-20 11:20:18.822471] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:11.027 request: 00:08:11.027 { 00:08:11.027 "trtype": "tcp", 00:08:11.027 "method": "nvmf_get_transports", 00:08:11.027 "req_id": 1 00:08:11.027 } 00:08:11.027 Got JSON-RPC error response 00:08:11.027 response: 00:08:11.027 { 00:08:11.027 "code": -19, 00:08:11.027 "message": "No such device" 00:08:11.027 } 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.027 [2024-11-20 11:20:18.834645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.027 11:20:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:11.286 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.286 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:11.286 { 00:08:11.286 "subsystems": [ 00:08:11.286 { 00:08:11.286 "subsystem": "fsdev", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "fsdev_set_opts", 00:08:11.286 "params": { 00:08:11.286 "fsdev_io_pool_size": 65535, 00:08:11.286 "fsdev_io_cache_size": 256 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "keyring", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "iobuf", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "iobuf_set_options", 00:08:11.286 "params": { 00:08:11.286 "small_pool_count": 8192, 00:08:11.286 "large_pool_count": 1024, 00:08:11.286 "small_bufsize": 8192, 00:08:11.286 "large_bufsize": 135168, 00:08:11.286 "enable_numa": false 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "sock", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "sock_set_default_impl", 00:08:11.286 "params": { 00:08:11.286 "impl_name": "posix" 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "sock_impl_set_options", 00:08:11.286 "params": { 00:08:11.286 "impl_name": "ssl", 00:08:11.286 "recv_buf_size": 4096, 00:08:11.286 "send_buf_size": 4096, 00:08:11.286 "enable_recv_pipe": true, 00:08:11.286 "enable_quickack": false, 00:08:11.286 
"enable_placement_id": 0, 00:08:11.286 "enable_zerocopy_send_server": true, 00:08:11.286 "enable_zerocopy_send_client": false, 00:08:11.286 "zerocopy_threshold": 0, 00:08:11.286 "tls_version": 0, 00:08:11.286 "enable_ktls": false 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "sock_impl_set_options", 00:08:11.286 "params": { 00:08:11.286 "impl_name": "posix", 00:08:11.286 "recv_buf_size": 2097152, 00:08:11.286 "send_buf_size": 2097152, 00:08:11.286 "enable_recv_pipe": true, 00:08:11.286 "enable_quickack": false, 00:08:11.286 "enable_placement_id": 0, 00:08:11.286 "enable_zerocopy_send_server": true, 00:08:11.286 "enable_zerocopy_send_client": false, 00:08:11.286 "zerocopy_threshold": 0, 00:08:11.286 "tls_version": 0, 00:08:11.286 "enable_ktls": false 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "vmd", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "accel", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "accel_set_options", 00:08:11.286 "params": { 00:08:11.286 "small_cache_size": 128, 00:08:11.286 "large_cache_size": 16, 00:08:11.286 "task_count": 2048, 00:08:11.286 "sequence_count": 2048, 00:08:11.286 "buf_count": 2048 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "bdev", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "bdev_set_options", 00:08:11.286 "params": { 00:08:11.286 "bdev_io_pool_size": 65535, 00:08:11.286 "bdev_io_cache_size": 256, 00:08:11.286 "bdev_auto_examine": true, 00:08:11.286 "iobuf_small_cache_size": 128, 00:08:11.286 "iobuf_large_cache_size": 16 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "bdev_raid_set_options", 00:08:11.286 "params": { 00:08:11.286 "process_window_size_kb": 1024, 00:08:11.286 "process_max_bandwidth_mb_sec": 0 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "bdev_iscsi_set_options", 
00:08:11.286 "params": { 00:08:11.286 "timeout_sec": 30 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "bdev_nvme_set_options", 00:08:11.286 "params": { 00:08:11.286 "action_on_timeout": "none", 00:08:11.286 "timeout_us": 0, 00:08:11.286 "timeout_admin_us": 0, 00:08:11.286 "keep_alive_timeout_ms": 10000, 00:08:11.286 "arbitration_burst": 0, 00:08:11.286 "low_priority_weight": 0, 00:08:11.286 "medium_priority_weight": 0, 00:08:11.286 "high_priority_weight": 0, 00:08:11.286 "nvme_adminq_poll_period_us": 10000, 00:08:11.286 "nvme_ioq_poll_period_us": 0, 00:08:11.286 "io_queue_requests": 0, 00:08:11.286 "delay_cmd_submit": true, 00:08:11.286 "transport_retry_count": 4, 00:08:11.286 "bdev_retry_count": 3, 00:08:11.286 "transport_ack_timeout": 0, 00:08:11.286 "ctrlr_loss_timeout_sec": 0, 00:08:11.286 "reconnect_delay_sec": 0, 00:08:11.286 "fast_io_fail_timeout_sec": 0, 00:08:11.286 "disable_auto_failback": false, 00:08:11.286 "generate_uuids": false, 00:08:11.286 "transport_tos": 0, 00:08:11.286 "nvme_error_stat": false, 00:08:11.286 "rdma_srq_size": 0, 00:08:11.286 "io_path_stat": false, 00:08:11.286 "allow_accel_sequence": false, 00:08:11.286 "rdma_max_cq_size": 0, 00:08:11.286 "rdma_cm_event_timeout_ms": 0, 00:08:11.286 "dhchap_digests": [ 00:08:11.286 "sha256", 00:08:11.286 "sha384", 00:08:11.286 "sha512" 00:08:11.286 ], 00:08:11.286 "dhchap_dhgroups": [ 00:08:11.286 "null", 00:08:11.286 "ffdhe2048", 00:08:11.286 "ffdhe3072", 00:08:11.286 "ffdhe4096", 00:08:11.286 "ffdhe6144", 00:08:11.286 "ffdhe8192" 00:08:11.286 ] 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "bdev_nvme_set_hotplug", 00:08:11.286 "params": { 00:08:11.286 "period_us": 100000, 00:08:11.286 "enable": false 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "bdev_wait_for_examine" 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "scsi", 00:08:11.286 "config": null 00:08:11.286 }, 00:08:11.286 { 
00:08:11.286 "subsystem": "scheduler", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "framework_set_scheduler", 00:08:11.286 "params": { 00:08:11.286 "name": "static" 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "vhost_scsi", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "vhost_blk", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "ublk", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "nbd", 00:08:11.286 "config": [] 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "subsystem": "nvmf", 00:08:11.286 "config": [ 00:08:11.286 { 00:08:11.286 "method": "nvmf_set_config", 00:08:11.286 "params": { 00:08:11.286 "discovery_filter": "match_any", 00:08:11.286 "admin_cmd_passthru": { 00:08:11.286 "identify_ctrlr": false 00:08:11.286 }, 00:08:11.286 "dhchap_digests": [ 00:08:11.286 "sha256", 00:08:11.286 "sha384", 00:08:11.286 "sha512" 00:08:11.286 ], 00:08:11.286 "dhchap_dhgroups": [ 00:08:11.286 "null", 00:08:11.286 "ffdhe2048", 00:08:11.286 "ffdhe3072", 00:08:11.286 "ffdhe4096", 00:08:11.286 "ffdhe6144", 00:08:11.286 "ffdhe8192" 00:08:11.286 ] 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "nvmf_set_max_subsystems", 00:08:11.286 "params": { 00:08:11.286 "max_subsystems": 1024 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "nvmf_set_crdt", 00:08:11.286 "params": { 00:08:11.286 "crdt1": 0, 00:08:11.286 "crdt2": 0, 00:08:11.286 "crdt3": 0 00:08:11.286 } 00:08:11.286 }, 00:08:11.286 { 00:08:11.286 "method": "nvmf_create_transport", 00:08:11.286 "params": { 00:08:11.286 "trtype": "TCP", 00:08:11.286 "max_queue_depth": 128, 00:08:11.286 "max_io_qpairs_per_ctrlr": 127, 00:08:11.286 "in_capsule_data_size": 4096, 00:08:11.286 "max_io_size": 131072, 00:08:11.286 "io_unit_size": 131072, 00:08:11.286 "max_aq_depth": 128, 00:08:11.286 "num_shared_buffers": 511, 
00:08:11.286 "buf_cache_size": 4294967295, 00:08:11.286 "dif_insert_or_strip": false, 00:08:11.286 "zcopy": false, 00:08:11.286 "c2h_success": true, 00:08:11.286 "sock_priority": 0, 00:08:11.287 "abort_timeout_sec": 1, 00:08:11.287 "ack_timeout": 0, 00:08:11.287 "data_wr_pool_size": 0 00:08:11.287 } 00:08:11.287 } 00:08:11.287 ] 00:08:11.287 }, 00:08:11.287 { 00:08:11.287 "subsystem": "iscsi", 00:08:11.287 "config": [ 00:08:11.287 { 00:08:11.287 "method": "iscsi_set_options", 00:08:11.287 "params": { 00:08:11.287 "node_base": "iqn.2016-06.io.spdk", 00:08:11.287 "max_sessions": 128, 00:08:11.287 "max_connections_per_session": 2, 00:08:11.287 "max_queue_depth": 64, 00:08:11.287 "default_time2wait": 2, 00:08:11.287 "default_time2retain": 20, 00:08:11.287 "first_burst_length": 8192, 00:08:11.287 "immediate_data": true, 00:08:11.287 "allow_duplicated_isid": false, 00:08:11.287 "error_recovery_level": 0, 00:08:11.287 "nop_timeout": 60, 00:08:11.287 "nop_in_interval": 30, 00:08:11.287 "disable_chap": false, 00:08:11.287 "require_chap": false, 00:08:11.287 "mutual_chap": false, 00:08:11.287 "chap_group": 0, 00:08:11.287 "max_large_datain_per_connection": 64, 00:08:11.287 "max_r2t_per_connection": 4, 00:08:11.287 "pdu_pool_size": 36864, 00:08:11.287 "immediate_data_pool_size": 16384, 00:08:11.287 "data_out_pool_size": 2048 00:08:11.287 } 00:08:11.287 } 00:08:11.287 ] 00:08:11.287 } 00:08:11.287 ] 00:08:11.287 } 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57125 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57125 ']' 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57125 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57125 00:08:11.287 killing process with pid 57125 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57125' 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57125 00:08:11.287 11:20:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57125 00:08:13.819 11:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57176 00:08:13.819 11:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:13.819 11:20:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57176 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57176 ']' 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57176 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57176 00:08:19.087 killing process with pid 57176 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57176' 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57176 00:08:19.087 11:20:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57176 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:20.990 ************************************ 00:08:20.990 END TEST skip_rpc_with_json 00:08:20.990 ************************************ 00:08:20.990 00:08:20.990 real 0m11.066s 00:08:20.990 user 0m10.500s 00:08:20.990 sys 0m1.018s 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:20.990 11:20:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:20.990 11:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.990 11:20:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.990 11:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.990 ************************************ 00:08:20.990 START TEST skip_rpc_with_delay 00:08:20.990 ************************************ 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:20.990 
11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.990 [2024-11-20 11:20:28.721209] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.990 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.990 00:08:20.990 real 0m0.169s 00:08:20.991 user 0m0.091s 00:08:20.991 sys 0m0.077s 00:08:20.991 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.991 11:20:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:20.991 ************************************ 00:08:20.991 END TEST skip_rpc_with_delay 00:08:20.991 ************************************ 00:08:20.991 11:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:20.991 11:20:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:20.991 11:20:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:20.991 11:20:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.991 11:20:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.991 11:20:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.991 ************************************ 00:08:20.991 START TEST exit_on_failed_rpc_init 00:08:20.991 ************************************ 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57309 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57309 00:08:20.991 11:20:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57309 ']' 00:08:20.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.991 11:20:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:21.250 [2024-11-20 11:20:28.964297] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:21.250 [2024-11-20 11:20:28.964481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57309 ] 00:08:21.509 [2024-11-20 11:20:29.154053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.509 [2024-11-20 11:20:29.330308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:22.446 11:20:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:22.446 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:22.705 [2024-11-20 11:20:30.337663] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:22.705 [2024-11-20 11:20:30.338073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57333 ] 00:08:22.705 [2024-11-20 11:20:30.528393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.963 [2024-11-20 11:20:30.684282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.963 [2024-11-20 11:20:30.684423] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:22.963 [2024-11-20 11:20:30.684451] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:22.963 [2024-11-20 11:20:30.684475] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57309 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57309 ']' 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57309 00:08:23.222 11:20:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.222 11:20:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57309 00:08:23.222 11:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.222 11:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.222 11:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57309' 00:08:23.222 killing process with pid 57309 00:08:23.222 11:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57309 00:08:23.222 11:20:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57309 00:08:25.752 00:08:25.752 real 0m4.397s 00:08:25.752 user 0m4.903s 00:08:25.752 sys 0m0.689s 00:08:25.752 11:20:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.752 11:20:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 ************************************ 00:08:25.752 END TEST exit_on_failed_rpc_init 00:08:25.752 ************************************ 00:08:25.752 11:20:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:25.752 ************************************ 00:08:25.752 END TEST skip_rpc 00:08:25.752 ************************************ 00:08:25.752 00:08:25.752 real 0m23.340s 00:08:25.752 user 0m22.411s 00:08:25.752 sys 0m2.462s 00:08:25.752 11:20:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.752 11:20:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 11:20:33 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:25.752 11:20:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.752 11:20:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.752 11:20:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 ************************************ 00:08:25.752 START TEST rpc_client 00:08:25.752 ************************************ 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:25.752 * Looking for test storage... 00:08:25.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@345 
-- # : 1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.752 11:20:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.752 --rc genhtml_branch_coverage=1 00:08:25.752 --rc genhtml_function_coverage=1 00:08:25.752 --rc genhtml_legend=1 00:08:25.752 --rc geninfo_all_blocks=1 00:08:25.752 --rc geninfo_unexecuted_blocks=1 00:08:25.752 00:08:25.752 ' 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.752 --rc genhtml_branch_coverage=1 00:08:25.752 --rc genhtml_function_coverage=1 00:08:25.752 --rc 
genhtml_legend=1 00:08:25.752 --rc geninfo_all_blocks=1 00:08:25.752 --rc geninfo_unexecuted_blocks=1 00:08:25.752 00:08:25.752 ' 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.752 --rc genhtml_branch_coverage=1 00:08:25.752 --rc genhtml_function_coverage=1 00:08:25.752 --rc genhtml_legend=1 00:08:25.752 --rc geninfo_all_blocks=1 00:08:25.752 --rc geninfo_unexecuted_blocks=1 00:08:25.752 00:08:25.752 ' 00:08:25.752 11:20:33 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.752 --rc genhtml_branch_coverage=1 00:08:25.752 --rc genhtml_function_coverage=1 00:08:25.752 --rc genhtml_legend=1 00:08:25.752 --rc geninfo_all_blocks=1 00:08:25.752 --rc geninfo_unexecuted_blocks=1 00:08:25.752 00:08:25.753 ' 00:08:25.753 11:20:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:25.753 OK 00:08:25.753 11:20:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:25.753 00:08:25.753 real 0m0.245s 00:08:25.753 user 0m0.141s 00:08:25.753 sys 0m0.112s 00:08:25.753 11:20:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.753 11:20:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:25.753 ************************************ 00:08:25.753 END TEST rpc_client 00:08:25.753 ************************************ 00:08:26.012 11:20:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:26.012 11:20:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.012 11:20:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.012 11:20:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.012 ************************************ 00:08:26.012 START TEST json_config 
00:08:26.012 ************************************ 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.012 11:20:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.012 11:20:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.012 11:20:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.012 11:20:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.012 11:20:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.012 11:20:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:26.012 11:20:33 json_config -- scripts/common.sh@345 -- # : 1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.012 11:20:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.012 11:20:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@353 -- # local d=1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.012 11:20:33 json_config -- scripts/common.sh@355 -- # echo 1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.012 11:20:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@353 -- # local d=2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.012 11:20:33 json_config -- scripts/common.sh@355 -- # echo 2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.012 11:20:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.012 11:20:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.012 11:20:33 json_config -- scripts/common.sh@368 -- # return 0 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.012 --rc genhtml_branch_coverage=1 00:08:26.012 --rc genhtml_function_coverage=1 00:08:26.012 --rc genhtml_legend=1 00:08:26.012 --rc geninfo_all_blocks=1 00:08:26.012 --rc geninfo_unexecuted_blocks=1 00:08:26.012 00:08:26.012 ' 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.012 --rc genhtml_branch_coverage=1 00:08:26.012 --rc genhtml_function_coverage=1 00:08:26.012 --rc genhtml_legend=1 00:08:26.012 --rc geninfo_all_blocks=1 00:08:26.012 --rc geninfo_unexecuted_blocks=1 00:08:26.012 00:08:26.012 ' 00:08:26.012 11:20:33 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.012 --rc genhtml_branch_coverage=1 00:08:26.012 --rc genhtml_function_coverage=1 00:08:26.012 --rc genhtml_legend=1 00:08:26.012 --rc geninfo_all_blocks=1 00:08:26.012 --rc geninfo_unexecuted_blocks=1 00:08:26.012 00:08:26.012 ' 00:08:26.012 11:20:33 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.012 --rc genhtml_branch_coverage=1 00:08:26.012 --rc genhtml_function_coverage=1 00:08:26.012 --rc genhtml_legend=1 00:08:26.012 --rc geninfo_all_blocks=1 00:08:26.012 --rc geninfo_unexecuted_blocks=1 00:08:26.012 00:08:26.012 ' 00:08:26.012 11:20:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efe9994e-228a-4e9b-98c1-203c146486a7 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=efe9994e-228a-4e9b-98c1-203c146486a7 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.012 11:20:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.012 11:20:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.012 11:20:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.012 11:20:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.012 11:20:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.012 11:20:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.012 11:20:33 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.012 11:20:33 json_config -- paths/export.sh@5 -- # export PATH 00:08:26.012 11:20:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@51 -- # : 0 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.012 11:20:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.012 WARNING: No tests are enabled so not running JSON configuration tests 00:08:26.012 11:20:33 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:26.012 11:20:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:26.012 11:20:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:26.012 11:20:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:26.012 11:20:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:26.013 11:20:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:26.013 11:20:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:26.013 00:08:26.013 real 0m0.181s 00:08:26.013 user 0m0.111s 00:08:26.013 sys 0m0.074s 00:08:26.013 11:20:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.013 ************************************ 00:08:26.013 END TEST json_config 00:08:26.013 ************************************ 00:08:26.013 11:20:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:26.013 11:20:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:26.013 11:20:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.013 11:20:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.013 11:20:33 -- common/autotest_common.sh@10 -- # set +x 00:08:26.013 ************************************ 00:08:26.013 START TEST json_config_extra_key 00:08:26.013 ************************************ 00:08:26.013 11:20:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:26.271 11:20:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.271 11:20:33 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:08:26.271 11:20:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.271 11:20:34 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.271 11:20:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.272 --rc genhtml_branch_coverage=1 00:08:26.272 --rc genhtml_function_coverage=1 00:08:26.272 --rc genhtml_legend=1 00:08:26.272 --rc geninfo_all_blocks=1 00:08:26.272 --rc geninfo_unexecuted_blocks=1 00:08:26.272 00:08:26.272 ' 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.272 --rc genhtml_branch_coverage=1 00:08:26.272 --rc genhtml_function_coverage=1 00:08:26.272 --rc 
genhtml_legend=1 00:08:26.272 --rc geninfo_all_blocks=1 00:08:26.272 --rc geninfo_unexecuted_blocks=1 00:08:26.272 00:08:26.272 ' 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.272 --rc genhtml_branch_coverage=1 00:08:26.272 --rc genhtml_function_coverage=1 00:08:26.272 --rc genhtml_legend=1 00:08:26.272 --rc geninfo_all_blocks=1 00:08:26.272 --rc geninfo_unexecuted_blocks=1 00:08:26.272 00:08:26.272 ' 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.272 --rc genhtml_branch_coverage=1 00:08:26.272 --rc genhtml_function_coverage=1 00:08:26.272 --rc genhtml_legend=1 00:08:26.272 --rc geninfo_all_blocks=1 00:08:26.272 --rc geninfo_unexecuted_blocks=1 00:08:26.272 00:08:26.272 ' 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:efe9994e-228a-4e9b-98c1-203c146486a7 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=efe9994e-228a-4e9b-98c1-203c146486a7 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.272 11:20:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.272 11:20:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.272 11:20:34 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.272 11:20:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.272 11:20:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:26.272 11:20:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.272 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.272 11:20:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:26.272 INFO: launching applications... 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:08:26.272 11:20:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57538 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:26.272 Waiting for target to run... 00:08:26.272 11:20:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57538 /var/tmp/spdk_tgt.sock 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57538 ']' 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.272 11:20:34 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:08:26.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:26.273 11:20:34 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.273 11:20:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 [2024-11-20 11:20:34.188392] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:26.531 [2024-11-20 11:20:34.188826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57538 ] 00:08:27.097 [2024-11-20 11:20:34.668727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.097 [2024-11-20 11:20:34.809969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.030 11:20:35 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.030 00:08:28.030 INFO: shutting down applications... 00:08:28.030 11:20:35 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:28.030 11:20:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:28.030 11:20:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57538 ]] 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57538 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:28.030 11:20:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:28.288 11:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:28.288 11:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:28.288 11:20:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:28.288 11:20:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:28.856 11:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:28.856 11:20:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:28.856 11:20:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:28.856 11:20:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.478 11:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.478 11:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.478 11:20:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:29.478 11:20:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.743 11:20:37 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:08:29.743 11:20:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.743 11:20:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:29.743 11:20:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.311 11:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.311 11:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.311 11:20:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:30.311 11:20:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57538 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:30.879 SPDK target shutdown done 00:08:30.879 11:20:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:30.879 Success 00:08:30.879 11:20:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:30.879 00:08:30.879 real 0m4.719s 00:08:30.879 user 0m4.128s 00:08:30.879 sys 0m0.653s 00:08:30.879 ************************************ 00:08:30.879 END TEST json_config_extra_key 00:08:30.879 ************************************ 00:08:30.879 11:20:38 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.879 11:20:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:30.879 11:20:38 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:30.879 11:20:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.879 11:20:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.879 11:20:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.879 ************************************ 00:08:30.879 START TEST alias_rpc 00:08:30.879 ************************************ 00:08:30.879 11:20:38 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:30.879 * Looking for test storage... 00:08:30.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:30.879 11:20:38 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.879 11:20:38 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.879 11:20:38 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:31.138 11:20:38 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.138 11:20:38 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.138 --rc genhtml_branch_coverage=1 00:08:31.138 --rc genhtml_function_coverage=1 00:08:31.138 --rc genhtml_legend=1 00:08:31.138 --rc geninfo_all_blocks=1 00:08:31.138 --rc geninfo_unexecuted_blocks=1 00:08:31.138 00:08:31.138 ' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.138 --rc genhtml_branch_coverage=1 00:08:31.138 --rc genhtml_function_coverage=1 00:08:31.138 --rc 
genhtml_legend=1 00:08:31.138 --rc geninfo_all_blocks=1 00:08:31.138 --rc geninfo_unexecuted_blocks=1 00:08:31.138 00:08:31.138 ' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.138 --rc genhtml_branch_coverage=1 00:08:31.138 --rc genhtml_function_coverage=1 00:08:31.138 --rc genhtml_legend=1 00:08:31.138 --rc geninfo_all_blocks=1 00:08:31.138 --rc geninfo_unexecuted_blocks=1 00:08:31.138 00:08:31.138 ' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.138 --rc genhtml_branch_coverage=1 00:08:31.138 --rc genhtml_function_coverage=1 00:08:31.138 --rc genhtml_legend=1 00:08:31.138 --rc geninfo_all_blocks=1 00:08:31.138 --rc geninfo_unexecuted_blocks=1 00:08:31.138 00:08:31.138 ' 00:08:31.138 11:20:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:31.138 11:20:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57649 00:08:31.138 11:20:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:31.138 11:20:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57649 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57649 ']' 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.138 11:20:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.397 [2024-11-20 11:20:38.991369] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:31.397 [2024-11-20 11:20:38.991838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57649 ] 00:08:31.397 [2024-11-20 11:20:39.182401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.657 [2024-11-20 11:20:39.348610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.592 11:20:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.592 11:20:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.592 11:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:32.850 11:20:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57649 00:08:32.850 11:20:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57649 ']' 00:08:32.850 11:20:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57649 00:08:32.850 11:20:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:32.850 11:20:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.850 11:20:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57649 00:08:33.109 killing process with pid 57649 00:08:33.109 11:20:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.109 11:20:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.109 11:20:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57649' 00:08:33.109 11:20:40 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57649 00:08:33.109 11:20:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 57649 00:08:35.642 ************************************ 00:08:35.642 END TEST alias_rpc 00:08:35.642 ************************************ 00:08:35.642 00:08:35.643 real 0m4.315s 00:08:35.643 user 0m4.530s 00:08:35.643 sys 0m0.678s 00:08:35.643 11:20:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.643 11:20:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.643 11:20:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:35.643 11:20:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:35.643 11:20:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.643 11:20:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.643 11:20:42 -- common/autotest_common.sh@10 -- # set +x 00:08:35.643 ************************************ 00:08:35.643 START TEST spdkcli_tcp 00:08:35.643 ************************************ 00:08:35.643 11:20:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:35.643 * Looking for test storage... 
00:08:35.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.643 11:20:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.643 --rc genhtml_branch_coverage=1 00:08:35.643 --rc genhtml_function_coverage=1 00:08:35.643 --rc genhtml_legend=1 00:08:35.643 --rc geninfo_all_blocks=1 00:08:35.643 --rc geninfo_unexecuted_blocks=1 00:08:35.643 00:08:35.643 ' 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.643 --rc genhtml_branch_coverage=1 00:08:35.643 --rc genhtml_function_coverage=1 00:08:35.643 --rc genhtml_legend=1 00:08:35.643 --rc geninfo_all_blocks=1 00:08:35.643 --rc geninfo_unexecuted_blocks=1 00:08:35.643 00:08:35.643 ' 00:08:35.643 11:20:43 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.643 --rc genhtml_branch_coverage=1 00:08:35.643 --rc genhtml_function_coverage=1 00:08:35.643 --rc genhtml_legend=1 00:08:35.643 --rc geninfo_all_blocks=1 00:08:35.643 --rc geninfo_unexecuted_blocks=1 00:08:35.643 00:08:35.643 ' 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.643 --rc genhtml_branch_coverage=1 00:08:35.643 --rc genhtml_function_coverage=1 00:08:35.643 --rc genhtml_legend=1 00:08:35.643 --rc geninfo_all_blocks=1 00:08:35.643 --rc geninfo_unexecuted_blocks=1 00:08:35.643 00:08:35.643 ' 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57756 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57756 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57756 ']' 00:08:35.643 11:20:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.643 11:20:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.643 [2024-11-20 11:20:43.317651] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:35.643 [2024-11-20 11:20:43.318079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57756 ] 00:08:35.901 [2024-11-20 11:20:43.507748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.901 [2024-11-20 11:20:43.639055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.901 [2024-11-20 11:20:43.639094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.837 11:20:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.837 11:20:44 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:36.837 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57784 00:08:36.837 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:36.837 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:37.096 [ 00:08:37.097 "bdev_malloc_delete", 00:08:37.097 "bdev_malloc_create", 00:08:37.097 "bdev_null_resize", 00:08:37.097 "bdev_null_delete", 00:08:37.097 "bdev_null_create", 00:08:37.097 "bdev_nvme_cuse_unregister", 00:08:37.097 "bdev_nvme_cuse_register", 00:08:37.097 "bdev_opal_new_user", 00:08:37.097 "bdev_opal_set_lock_state", 00:08:37.097 "bdev_opal_delete", 00:08:37.097 "bdev_opal_get_info", 00:08:37.097 "bdev_opal_create", 00:08:37.097 "bdev_nvme_opal_revert", 00:08:37.097 "bdev_nvme_opal_init", 00:08:37.097 "bdev_nvme_send_cmd", 00:08:37.097 "bdev_nvme_set_keys", 00:08:37.097 "bdev_nvme_get_path_iostat", 00:08:37.097 "bdev_nvme_get_mdns_discovery_info", 00:08:37.097 "bdev_nvme_stop_mdns_discovery", 00:08:37.097 "bdev_nvme_start_mdns_discovery", 00:08:37.097 "bdev_nvme_set_multipath_policy", 00:08:37.097 
"bdev_nvme_set_preferred_path", 00:08:37.097 "bdev_nvme_get_io_paths", 00:08:37.097 "bdev_nvme_remove_error_injection", 00:08:37.097 "bdev_nvme_add_error_injection", 00:08:37.097 "bdev_nvme_get_discovery_info", 00:08:37.097 "bdev_nvme_stop_discovery", 00:08:37.097 "bdev_nvme_start_discovery", 00:08:37.097 "bdev_nvme_get_controller_health_info", 00:08:37.097 "bdev_nvme_disable_controller", 00:08:37.097 "bdev_nvme_enable_controller", 00:08:37.097 "bdev_nvme_reset_controller", 00:08:37.097 "bdev_nvme_get_transport_statistics", 00:08:37.097 "bdev_nvme_apply_firmware", 00:08:37.097 "bdev_nvme_detach_controller", 00:08:37.097 "bdev_nvme_get_controllers", 00:08:37.097 "bdev_nvme_attach_controller", 00:08:37.097 "bdev_nvme_set_hotplug", 00:08:37.097 "bdev_nvme_set_options", 00:08:37.097 "bdev_passthru_delete", 00:08:37.097 "bdev_passthru_create", 00:08:37.097 "bdev_lvol_set_parent_bdev", 00:08:37.097 "bdev_lvol_set_parent", 00:08:37.097 "bdev_lvol_check_shallow_copy", 00:08:37.097 "bdev_lvol_start_shallow_copy", 00:08:37.097 "bdev_lvol_grow_lvstore", 00:08:37.097 "bdev_lvol_get_lvols", 00:08:37.097 "bdev_lvol_get_lvstores", 00:08:37.097 "bdev_lvol_delete", 00:08:37.097 "bdev_lvol_set_read_only", 00:08:37.097 "bdev_lvol_resize", 00:08:37.097 "bdev_lvol_decouple_parent", 00:08:37.097 "bdev_lvol_inflate", 00:08:37.097 "bdev_lvol_rename", 00:08:37.097 "bdev_lvol_clone_bdev", 00:08:37.097 "bdev_lvol_clone", 00:08:37.097 "bdev_lvol_snapshot", 00:08:37.097 "bdev_lvol_create", 00:08:37.097 "bdev_lvol_delete_lvstore", 00:08:37.097 "bdev_lvol_rename_lvstore", 00:08:37.097 "bdev_lvol_create_lvstore", 00:08:37.097 "bdev_raid_set_options", 00:08:37.097 "bdev_raid_remove_base_bdev", 00:08:37.097 "bdev_raid_add_base_bdev", 00:08:37.097 "bdev_raid_delete", 00:08:37.097 "bdev_raid_create", 00:08:37.097 "bdev_raid_get_bdevs", 00:08:37.097 "bdev_error_inject_error", 00:08:37.097 "bdev_error_delete", 00:08:37.097 "bdev_error_create", 00:08:37.097 "bdev_split_delete", 00:08:37.097 
"bdev_split_create", 00:08:37.097 "bdev_delay_delete", 00:08:37.097 "bdev_delay_create", 00:08:37.097 "bdev_delay_update_latency", 00:08:37.097 "bdev_zone_block_delete", 00:08:37.097 "bdev_zone_block_create", 00:08:37.097 "blobfs_create", 00:08:37.097 "blobfs_detect", 00:08:37.097 "blobfs_set_cache_size", 00:08:37.097 "bdev_aio_delete", 00:08:37.097 "bdev_aio_rescan", 00:08:37.097 "bdev_aio_create", 00:08:37.097 "bdev_ftl_set_property", 00:08:37.097 "bdev_ftl_get_properties", 00:08:37.097 "bdev_ftl_get_stats", 00:08:37.097 "bdev_ftl_unmap", 00:08:37.097 "bdev_ftl_unload", 00:08:37.097 "bdev_ftl_delete", 00:08:37.097 "bdev_ftl_load", 00:08:37.097 "bdev_ftl_create", 00:08:37.097 "bdev_virtio_attach_controller", 00:08:37.097 "bdev_virtio_scsi_get_devices", 00:08:37.097 "bdev_virtio_detach_controller", 00:08:37.097 "bdev_virtio_blk_set_hotplug", 00:08:37.097 "bdev_iscsi_delete", 00:08:37.097 "bdev_iscsi_create", 00:08:37.097 "bdev_iscsi_set_options", 00:08:37.097 "accel_error_inject_error", 00:08:37.097 "ioat_scan_accel_module", 00:08:37.097 "dsa_scan_accel_module", 00:08:37.097 "iaa_scan_accel_module", 00:08:37.097 "keyring_file_remove_key", 00:08:37.097 "keyring_file_add_key", 00:08:37.097 "keyring_linux_set_options", 00:08:37.097 "fsdev_aio_delete", 00:08:37.097 "fsdev_aio_create", 00:08:37.097 "iscsi_get_histogram", 00:08:37.097 "iscsi_enable_histogram", 00:08:37.097 "iscsi_set_options", 00:08:37.097 "iscsi_get_auth_groups", 00:08:37.097 "iscsi_auth_group_remove_secret", 00:08:37.097 "iscsi_auth_group_add_secret", 00:08:37.097 "iscsi_delete_auth_group", 00:08:37.097 "iscsi_create_auth_group", 00:08:37.097 "iscsi_set_discovery_auth", 00:08:37.097 "iscsi_get_options", 00:08:37.097 "iscsi_target_node_request_logout", 00:08:37.097 "iscsi_target_node_set_redirect", 00:08:37.097 "iscsi_target_node_set_auth", 00:08:37.097 "iscsi_target_node_add_lun", 00:08:37.097 "iscsi_get_stats", 00:08:37.097 "iscsi_get_connections", 00:08:37.097 "iscsi_portal_group_set_auth", 
00:08:37.097 "iscsi_start_portal_group", 00:08:37.097 "iscsi_delete_portal_group", 00:08:37.097 "iscsi_create_portal_group", 00:08:37.097 "iscsi_get_portal_groups", 00:08:37.097 "iscsi_delete_target_node", 00:08:37.097 "iscsi_target_node_remove_pg_ig_maps", 00:08:37.097 "iscsi_target_node_add_pg_ig_maps", 00:08:37.097 "iscsi_create_target_node", 00:08:37.097 "iscsi_get_target_nodes", 00:08:37.097 "iscsi_delete_initiator_group", 00:08:37.097 "iscsi_initiator_group_remove_initiators", 00:08:37.097 "iscsi_initiator_group_add_initiators", 00:08:37.097 "iscsi_create_initiator_group", 00:08:37.097 "iscsi_get_initiator_groups", 00:08:37.097 "nvmf_set_crdt", 00:08:37.097 "nvmf_set_config", 00:08:37.097 "nvmf_set_max_subsystems", 00:08:37.097 "nvmf_stop_mdns_prr", 00:08:37.097 "nvmf_publish_mdns_prr", 00:08:37.097 "nvmf_subsystem_get_listeners", 00:08:37.097 "nvmf_subsystem_get_qpairs", 00:08:37.097 "nvmf_subsystem_get_controllers", 00:08:37.097 "nvmf_get_stats", 00:08:37.097 "nvmf_get_transports", 00:08:37.097 "nvmf_create_transport", 00:08:37.097 "nvmf_get_targets", 00:08:37.097 "nvmf_delete_target", 00:08:37.097 "nvmf_create_target", 00:08:37.097 "nvmf_subsystem_allow_any_host", 00:08:37.097 "nvmf_subsystem_set_keys", 00:08:37.097 "nvmf_subsystem_remove_host", 00:08:37.097 "nvmf_subsystem_add_host", 00:08:37.097 "nvmf_ns_remove_host", 00:08:37.097 "nvmf_ns_add_host", 00:08:37.097 "nvmf_subsystem_remove_ns", 00:08:37.097 "nvmf_subsystem_set_ns_ana_group", 00:08:37.097 "nvmf_subsystem_add_ns", 00:08:37.097 "nvmf_subsystem_listener_set_ana_state", 00:08:37.097 "nvmf_discovery_get_referrals", 00:08:37.097 "nvmf_discovery_remove_referral", 00:08:37.097 "nvmf_discovery_add_referral", 00:08:37.097 "nvmf_subsystem_remove_listener", 00:08:37.097 "nvmf_subsystem_add_listener", 00:08:37.097 "nvmf_delete_subsystem", 00:08:37.097 "nvmf_create_subsystem", 00:08:37.097 "nvmf_get_subsystems", 00:08:37.097 "env_dpdk_get_mem_stats", 00:08:37.097 "nbd_get_disks", 00:08:37.097 
"nbd_stop_disk", 00:08:37.097 "nbd_start_disk", 00:08:37.097 "ublk_recover_disk", 00:08:37.097 "ublk_get_disks", 00:08:37.097 "ublk_stop_disk", 00:08:37.097 "ublk_start_disk", 00:08:37.097 "ublk_destroy_target", 00:08:37.097 "ublk_create_target", 00:08:37.097 "virtio_blk_create_transport", 00:08:37.097 "virtio_blk_get_transports", 00:08:37.097 "vhost_controller_set_coalescing", 00:08:37.097 "vhost_get_controllers", 00:08:37.097 "vhost_delete_controller", 00:08:37.097 "vhost_create_blk_controller", 00:08:37.097 "vhost_scsi_controller_remove_target", 00:08:37.097 "vhost_scsi_controller_add_target", 00:08:37.097 "vhost_start_scsi_controller", 00:08:37.097 "vhost_create_scsi_controller", 00:08:37.097 "thread_set_cpumask", 00:08:37.097 "scheduler_set_options", 00:08:37.097 "framework_get_governor", 00:08:37.097 "framework_get_scheduler", 00:08:37.097 "framework_set_scheduler", 00:08:37.097 "framework_get_reactors", 00:08:37.097 "thread_get_io_channels", 00:08:37.097 "thread_get_pollers", 00:08:37.097 "thread_get_stats", 00:08:37.097 "framework_monitor_context_switch", 00:08:37.097 "spdk_kill_instance", 00:08:37.097 "log_enable_timestamps", 00:08:37.097 "log_get_flags", 00:08:37.097 "log_clear_flag", 00:08:37.097 "log_set_flag", 00:08:37.097 "log_get_level", 00:08:37.097 "log_set_level", 00:08:37.097 "log_get_print_level", 00:08:37.097 "log_set_print_level", 00:08:37.097 "framework_enable_cpumask_locks", 00:08:37.097 "framework_disable_cpumask_locks", 00:08:37.097 "framework_wait_init", 00:08:37.097 "framework_start_init", 00:08:37.097 "scsi_get_devices", 00:08:37.097 "bdev_get_histogram", 00:08:37.097 "bdev_enable_histogram", 00:08:37.097 "bdev_set_qos_limit", 00:08:37.097 "bdev_set_qd_sampling_period", 00:08:37.097 "bdev_get_bdevs", 00:08:37.097 "bdev_reset_iostat", 00:08:37.097 "bdev_get_iostat", 00:08:37.097 "bdev_examine", 00:08:37.097 "bdev_wait_for_examine", 00:08:37.097 "bdev_set_options", 00:08:37.097 "accel_get_stats", 00:08:37.097 "accel_set_options", 
00:08:37.097 "accel_set_driver", 00:08:37.097 "accel_crypto_key_destroy", 00:08:37.097 "accel_crypto_keys_get", 00:08:37.097 "accel_crypto_key_create", 00:08:37.097 "accel_assign_opc", 00:08:37.097 "accel_get_module_info", 00:08:37.097 "accel_get_opc_assignments", 00:08:37.097 "vmd_rescan", 00:08:37.098 "vmd_remove_device", 00:08:37.098 "vmd_enable", 00:08:37.098 "sock_get_default_impl", 00:08:37.098 "sock_set_default_impl", 00:08:37.098 "sock_impl_set_options", 00:08:37.098 "sock_impl_get_options", 00:08:37.098 "iobuf_get_stats", 00:08:37.098 "iobuf_set_options", 00:08:37.098 "keyring_get_keys", 00:08:37.098 "framework_get_pci_devices", 00:08:37.098 "framework_get_config", 00:08:37.098 "framework_get_subsystems", 00:08:37.098 "fsdev_set_opts", 00:08:37.098 "fsdev_get_opts", 00:08:37.098 "trace_get_info", 00:08:37.098 "trace_get_tpoint_group_mask", 00:08:37.098 "trace_disable_tpoint_group", 00:08:37.098 "trace_enable_tpoint_group", 00:08:37.098 "trace_clear_tpoint_mask", 00:08:37.098 "trace_set_tpoint_mask", 00:08:37.098 "notify_get_notifications", 00:08:37.098 "notify_get_types", 00:08:37.098 "spdk_get_version", 00:08:37.098 "rpc_get_methods" 00:08:37.098 ] 00:08:37.098 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.098 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:37.098 11:20:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57756 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57756 ']' 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57756 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.098 11:20:44 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57756 00:08:37.098 killing process with pid 57756 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57756' 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57756 00:08:37.098 11:20:44 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57756 00:08:39.629 ************************************ 00:08:39.629 END TEST spdkcli_tcp 00:08:39.629 ************************************ 00:08:39.629 00:08:39.629 real 0m4.091s 00:08:39.629 user 0m7.411s 00:08:39.629 sys 0m0.667s 00:08:39.629 11:20:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.629 11:20:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.629 11:20:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:39.629 11:20:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.629 11:20:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.629 11:20:47 -- common/autotest_common.sh@10 -- # set +x 00:08:39.629 ************************************ 00:08:39.629 START TEST dpdk_mem_utility 00:08:39.629 ************************************ 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:39.629 * Looking for test storage... 
00:08:39.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.629 11:20:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.629 --rc genhtml_branch_coverage=1 00:08:39.629 --rc genhtml_function_coverage=1 00:08:39.629 --rc genhtml_legend=1 00:08:39.629 --rc geninfo_all_blocks=1 00:08:39.629 --rc geninfo_unexecuted_blocks=1 00:08:39.629 00:08:39.629 ' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.629 --rc genhtml_branch_coverage=1 00:08:39.629 --rc genhtml_function_coverage=1 00:08:39.629 --rc genhtml_legend=1 00:08:39.629 --rc geninfo_all_blocks=1 00:08:39.629 --rc 
geninfo_unexecuted_blocks=1 00:08:39.629 00:08:39.629 ' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.629 --rc genhtml_branch_coverage=1 00:08:39.629 --rc genhtml_function_coverage=1 00:08:39.629 --rc genhtml_legend=1 00:08:39.629 --rc geninfo_all_blocks=1 00:08:39.629 --rc geninfo_unexecuted_blocks=1 00:08:39.629 00:08:39.629 ' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.629 --rc genhtml_branch_coverage=1 00:08:39.629 --rc genhtml_function_coverage=1 00:08:39.629 --rc genhtml_legend=1 00:08:39.629 --rc geninfo_all_blocks=1 00:08:39.629 --rc geninfo_unexecuted_blocks=1 00:08:39.629 00:08:39.629 ' 00:08:39.629 11:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:39.629 11:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57878 00:08:39.629 11:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:39.629 11:20:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57878 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57878 ']' 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.629 11:20:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:39.629 [2024-11-20 11:20:47.404374] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:39.629 [2024-11-20 11:20:47.404804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57878 ] 00:08:39.889 [2024-11-20 11:20:47.586358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.148 [2024-11-20 11:20:47.743666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.086 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.086 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:41.086 11:20:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:41.086 11:20:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:41.086 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.086 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:41.086 { 00:08:41.086 "filename": "/tmp/spdk_mem_dump.txt" 00:08:41.086 } 00:08:41.086 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.086 11:20:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:41.086 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:41.086 1 heaps totaling size 816.000000 MiB 00:08:41.086 size: 816.000000 MiB heap id: 0 00:08:41.086 end heaps---------- 00:08:41.086 9 mempools totaling size 595.772034 MiB 00:08:41.086 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:41.086 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:41.086 size: 92.545471 MiB name: bdev_io_57878 00:08:41.086 size: 50.003479 MiB name: msgpool_57878 00:08:41.086 size: 36.509338 MiB name: fsdev_io_57878 00:08:41.086 size: 21.763794 MiB name: PDU_Pool 00:08:41.086 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:41.086 size: 4.133484 MiB name: evtpool_57878 00:08:41.086 size: 0.026123 MiB name: Session_Pool 00:08:41.086 end mempools------- 00:08:41.086 6 memzones totaling size 4.142822 MiB 00:08:41.086 size: 1.000366 MiB name: RG_ring_0_57878 00:08:41.086 size: 1.000366 MiB name: RG_ring_1_57878 00:08:41.086 size: 1.000366 MiB name: RG_ring_4_57878 00:08:41.086 size: 1.000366 MiB name: RG_ring_5_57878 00:08:41.086 size: 0.125366 MiB name: RG_ring_2_57878 00:08:41.086 size: 0.015991 MiB name: RG_ring_3_57878 00:08:41.086 end memzones------- 00:08:41.086 11:20:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:41.086 heap id: 0 total size: 816.000000 MiB number of busy elements: 315 number of free elements: 18 00:08:41.086 list of free elements. 
size: 16.791382 MiB 00:08:41.086 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:41.086 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:41.086 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:41.086 element at address: 0x200018d00040 with size: 0.999939 MiB 00:08:41.086 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:41.086 element at address: 0x200019200000 with size: 0.999084 MiB 00:08:41.086 element at address: 0x200031e00000 with size: 0.994324 MiB 00:08:41.086 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:41.086 element at address: 0x200018a00000 with size: 0.959656 MiB 00:08:41.086 element at address: 0x200019500040 with size: 0.936401 MiB 00:08:41.086 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:41.086 element at address: 0x20001ac00000 with size: 0.561951 MiB 00:08:41.086 element at address: 0x200000c00000 with size: 0.490173 MiB 00:08:41.086 element at address: 0x200018e00000 with size: 0.487976 MiB 00:08:41.086 element at address: 0x200019600000 with size: 0.485413 MiB 00:08:41.086 element at address: 0x200012c00000 with size: 0.443237 MiB 00:08:41.086 element at address: 0x200028000000 with size: 0.390442 MiB 00:08:41.086 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:41.086 list of standard malloc elements. 
size: 199.287720 MiB 00:08:41.086 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:41.086 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:41.086 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:08:41.086 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:41.086 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:41.086 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:41.086 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:08:41.086 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:41.086 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:41.086 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:08:41.086 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:41.086 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:41.086 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:41.086 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:41.086 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:41.087 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:08:41.087 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:41.087 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71780 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71880 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71980 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c72080 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012c72180 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:08:41.087 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:08:41.087 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac911c0 with size: 0.000244 
MiB 00:08:41.088 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92dc0 
with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:08:41.088 element at 
address: 0x20001ac949c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:08:41.088 element at address: 0x200028063f40 with size: 0.000244 MiB 00:08:41.088 element at address: 0x200028064040 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806af80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b080 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b180 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b280 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b380 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b480 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b580 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b680 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b780 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b880 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806b980 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806bc80 with size: 0.000244 MiB 
00:08:41.088 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806be80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c080 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c180 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c280 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c380 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c480 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c580 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c680 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c780 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c880 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806c980 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d080 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d180 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d280 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d380 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d480 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d580 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d680 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d780 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d880 with 
size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806d980 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806da80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806db80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806de80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806df80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e080 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e180 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e280 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e380 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e480 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e580 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e680 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e780 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e880 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806e980 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:08:41.088 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f080 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f180 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f280 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f380 with size: 0.000244 MiB 00:08:41.089 element at address: 
0x20002806f480 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f580 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f680 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f780 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f880 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806f980 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:08:41.089 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:08:41.089 list of memzone associated elements. size: 599.920898 MiB 00:08:41.089 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:08:41.089 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:41.089 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:08:41.089 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:41.089 element at address: 0x200012df4740 with size: 92.045105 MiB 00:08:41.089 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57878_0 00:08:41.089 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:41.089 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57878_0 00:08:41.089 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:41.089 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57878_0 00:08:41.089 element at address: 0x2000197be900 with size: 20.255615 MiB 00:08:41.089 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:41.089 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:08:41.089 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:41.089 element at address: 0x2000004ffec0 with size: 
3.000305 MiB 00:08:41.089 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57878_0 00:08:41.089 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:41.089 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57878 00:08:41.089 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:41.089 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57878 00:08:41.089 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:41.089 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:41.089 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:08:41.089 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:41.089 element at address: 0x200018afde00 with size: 1.008179 MiB 00:08:41.089 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:41.089 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:08:41.089 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:41.089 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:41.089 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57878 00:08:41.089 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:41.089 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57878 00:08:41.089 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:08:41.089 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57878 00:08:41.089 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:08:41.089 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57878 00:08:41.089 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:41.089 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57878 00:08:41.089 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:41.089 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57878 00:08:41.089 element at address: 0x200018e7dac0 with size: 
0.500549 MiB 00:08:41.089 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:41.089 element at address: 0x200012c72280 with size: 0.500549 MiB 00:08:41.089 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:41.089 element at address: 0x20001967c440 with size: 0.250549 MiB 00:08:41.089 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:41.089 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:41.089 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57878 00:08:41.089 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:41.089 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57878 00:08:41.089 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:08:41.089 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:41.089 element at address: 0x200028064140 with size: 0.023804 MiB 00:08:41.089 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:41.089 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:41.089 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57878 00:08:41.089 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:08:41.089 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:41.089 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:41.089 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57878 00:08:41.089 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:41.089 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57878 00:08:41.089 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:41.089 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57878 00:08:41.089 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:08:41.089 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:41.089 11:20:48 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:41.089 11:20:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57878 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57878 ']' 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57878 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57878 00:08:41.089 killing process with pid 57878 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57878' 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57878 00:08:41.089 11:20:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57878 00:08:43.621 ************************************ 00:08:43.621 END TEST dpdk_mem_utility 00:08:43.621 ************************************ 00:08:43.621 00:08:43.621 real 0m3.927s 00:08:43.621 user 0m3.956s 00:08:43.621 sys 0m0.614s 00:08:43.621 11:20:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.621 11:20:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 11:20:51 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:43.621 11:20:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.621 11:20:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.621 11:20:51 -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 ************************************ 
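The trace above shows the `killprocess` helper from `common/autotest_common.sh` at work: it validates the pid argument, probes the process with `kill -0`, resolves the process name via `ps`, then kills the pid and waits for it to exit. A minimal sketch of that pattern follows; the function name is hypothetical and the name-matching and sudo special-casing of the real helper are omitted.

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the killprocess helper traced above:
# check the pid exists, terminate it, and reap it.
killprocess_sketch() {
    local pid=$1
    # Refuse an empty pid argument, as the real helper does with '[ -z ... ]'
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only tests that the process exists
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        # Reap the child; ignore the nonzero wait status of a signaled process
        wait "$pid" 2>/dev/null || true
    fi
}
```

The `kill -0` probe mirrors the `kill -0 57878` call in the trace: it distinguishes "process already gone" from "process still running" without delivering a signal.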
00:08:43.621 START TEST event 00:08:43.621 ************************************ 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:43.621 * Looking for test storage... 00:08:43.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.621 11:20:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.621 11:20:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.621 11:20:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.621 11:20:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.621 11:20:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.621 11:20:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.621 11:20:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.621 11:20:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.621 11:20:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.621 11:20:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.621 11:20:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.621 11:20:51 event -- scripts/common.sh@344 -- # case "$op" in 00:08:43.621 11:20:51 event -- scripts/common.sh@345 -- # : 1 00:08:43.621 11:20:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.621 11:20:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.621 11:20:51 event -- scripts/common.sh@365 -- # decimal 1 00:08:43.621 11:20:51 event -- scripts/common.sh@353 -- # local d=1 00:08:43.621 11:20:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.621 11:20:51 event -- scripts/common.sh@355 -- # echo 1 00:08:43.621 11:20:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.621 11:20:51 event -- scripts/common.sh@366 -- # decimal 2 00:08:43.621 11:20:51 event -- scripts/common.sh@353 -- # local d=2 00:08:43.621 11:20:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.621 11:20:51 event -- scripts/common.sh@355 -- # echo 2 00:08:43.621 11:20:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.621 11:20:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.621 11:20:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.621 11:20:51 event -- scripts/common.sh@368 -- # return 0 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.621 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 11:20:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:43.621 11:20:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:43.621 11:20:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:43.621 11:20:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.621 11:20:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:43.621 ************************************ 00:08:43.621 START TEST event_perf 00:08:43.621 ************************************ 00:08:43.621 11:20:51 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:43.621 Running I/O for 1 seconds...[2024-11-20 11:20:51.321505] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
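The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits each version string on `.` and `-` using IFS, reads the fields into arrays, and compares them numerically left to right, padding missing fields with zero. A minimal sketch of that comparison, assuming a hypothetical `version_lt` name in place of the real `lt`/`cmp_versions` pair:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the cmp_versions logic traced above:
# true (exit 0) iff version $1 is strictly less than version $2.
version_lt() {
    local IFS=.-                 # split fields on '.' and '-', as in the trace
    local -a v1=($1) v2=($2)
    local i n1 n2
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        n1=${v1[i]:-0}           # missing fields compare as 0 ("1.15" vs "2")
        n2=${v2[i]:-0}
        if ((n1 < n2)); then return 0; fi
        if ((n1 > n2)); then return 1; fi
    done
    return 1                     # equal versions are not "less than"
}
```

This reproduces the decision traced in the log: field 0 of `1.15` is 1, field 0 of `2` is 2, so `lt 1.15 2` succeeds on the first comparison and the lcov options are exported.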
00:08:43.621 [2024-11-20 11:20:51.322047] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:08:43.881 [2024-11-20 11:20:51.503830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.881 [2024-11-20 11:20:51.692034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.881 [2024-11-20 11:20:51.692205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.881 [2024-11-20 11:20:51.692321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.881 Running I/O for 1 seconds...[2024-11-20 11:20:51.693318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.258 00:08:45.258 lcore 0: 203484 00:08:45.258 lcore 1: 203485 00:08:45.258 lcore 2: 203483 00:08:45.258 lcore 3: 203483 00:08:45.258 done. 
00:08:45.258 00:08:45.258 real 0m1.661s 00:08:45.258 user 0m4.411s 00:08:45.258 sys 0m0.121s 00:08:45.258 11:20:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.258 ************************************ 00:08:45.258 END TEST event_perf 00:08:45.258 ************************************ 00:08:45.258 11:20:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 11:20:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:45.258 11:20:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:45.258 11:20:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.258 11:20:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.258 ************************************ 00:08:45.258 START TEST event_reactor 00:08:45.258 ************************************ 00:08:45.258 11:20:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:45.258 [2024-11-20 11:20:53.039373] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:45.258 [2024-11-20 11:20:53.039545] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58031 ] 00:08:45.518 [2024-11-20 11:20:53.223711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.518 [2024-11-20 11:20:53.355366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.989 test_start 00:08:46.989 oneshot 00:08:46.989 tick 100 00:08:46.989 tick 100 00:08:46.989 tick 250 00:08:46.989 tick 100 00:08:46.989 tick 100 00:08:46.989 tick 100 00:08:46.989 tick 250 00:08:46.989 tick 500 00:08:46.989 tick 100 00:08:46.989 tick 100 00:08:46.989 tick 250 00:08:46.989 tick 100 00:08:46.989 tick 100 00:08:46.989 test_end 00:08:46.989 00:08:46.989 real 0m1.574s 00:08:46.989 user 0m1.373s 00:08:46.989 sys 0m0.093s 00:08:46.989 ************************************ 00:08:46.989 END TEST event_reactor 00:08:46.989 ************************************ 00:08:46.989 11:20:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.989 11:20:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:46.989 11:20:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:46.989 11:20:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:46.989 11:20:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.989 11:20:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.989 ************************************ 00:08:46.989 START TEST event_reactor_perf 00:08:46.989 ************************************ 00:08:46.989 11:20:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:46.989 [2024-11-20 
11:20:54.667832] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:08:46.989 [2024-11-20 11:20:54.668010] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58068 ] 00:08:47.247 [2024-11-20 11:20:54.855648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.247 [2024-11-20 11:20:55.008285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.624 test_start 00:08:48.624 test_end 00:08:48.624 Performance: 279872 events per second 00:08:48.624 ************************************ 00:08:48.624 END TEST event_reactor_perf 00:08:48.624 ************************************ 00:08:48.624 00:08:48.624 real 0m1.623s 00:08:48.625 user 0m1.402s 00:08:48.625 sys 0m0.112s 00:08:48.625 11:20:56 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.625 11:20:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:48.625 11:20:56 event -- event/event.sh@49 -- # uname -s 00:08:48.625 11:20:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:48.625 11:20:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:48.625 11:20:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.625 11:20:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.625 11:20:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:48.625 ************************************ 00:08:48.625 START TEST event_scheduler 00:08:48.625 ************************************ 00:08:48.625 11:20:56 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:48.625 * Looking for test storage... 
00:08:48.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:48.625 11:20:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.625 11:20:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.625 11:20:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.884 11:20:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.884 --rc genhtml_branch_coverage=1 00:08:48.884 --rc genhtml_function_coverage=1 00:08:48.884 --rc genhtml_legend=1 00:08:48.884 --rc geninfo_all_blocks=1 00:08:48.884 --rc geninfo_unexecuted_blocks=1 00:08:48.884 00:08:48.884 ' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.884 --rc genhtml_branch_coverage=1 00:08:48.884 --rc genhtml_function_coverage=1 00:08:48.884 --rc 
genhtml_legend=1 00:08:48.884 --rc geninfo_all_blocks=1 00:08:48.884 --rc geninfo_unexecuted_blocks=1 00:08:48.884 00:08:48.884 ' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.884 --rc genhtml_branch_coverage=1 00:08:48.884 --rc genhtml_function_coverage=1 00:08:48.884 --rc genhtml_legend=1 00:08:48.884 --rc geninfo_all_blocks=1 00:08:48.884 --rc geninfo_unexecuted_blocks=1 00:08:48.884 00:08:48.884 ' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.884 --rc genhtml_branch_coverage=1 00:08:48.884 --rc genhtml_function_coverage=1 00:08:48.884 --rc genhtml_legend=1 00:08:48.884 --rc geninfo_all_blocks=1 00:08:48.884 --rc geninfo_unexecuted_blocks=1 00:08:48.884 00:08:48.884 ' 00:08:48.884 11:20:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:48.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.884 11:20:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58138 00:08:48.884 11:20:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:48.884 11:20:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58138 00:08:48.884 11:20:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58138 ']' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.884 11:20:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:48.884 [2024-11-20 11:20:56.593804] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:48.884 [2024-11-20 11:20:56.594343] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58138 ] 00:08:49.143 [2024-11-20 11:20:56.780478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.143 [2024-11-20 11:20:56.944250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.143 [2024-11-20 11:20:56.944371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.143 [2024-11-20 11:20:56.944475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.143 [2024-11-20 11:20:56.944480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:50.080 11:20:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.080 POWER: Cannot set governor of lcore 0 to performance 00:08:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.080 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:50.080 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:50.080 POWER: Unable to set Power Management Environment for lcore 0 00:08:50.080 [2024-11-20 11:20:57.558848] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:50.080 [2024-11-20 11:20:57.558876] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:50.080 [2024-11-20 11:20:57.558891] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:50.080 [2024-11-20 11:20:57.558918] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:50.080 [2024-11-20 11:20:57.558931] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:50.080 [2024-11-20 11:20:57.558950] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.080 11:20:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.080 [2024-11-20 11:20:57.888565] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.080 11:20:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.080 11:20:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.080 ************************************ 00:08:50.080 START TEST scheduler_create_thread 00:08:50.080 ************************************ 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.080 2 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.080 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.081 3 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.081 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 4 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 5 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 6 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.340 7 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 8 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 9 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 10 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.340 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.341 11:20:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.341 11:20:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.341 ************************************ 00:08:50.341 END TEST scheduler_create_thread 00:08:50.341 ************************************ 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.341 00:08:50.341 real 0m0.107s 00:08:50.341 user 0m0.015s 00:08:50.341 sys 0m0.004s 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.341 11:20:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.341 11:20:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:50.341 11:20:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58138 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58138 ']' 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58138 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58138 00:08:50.341 killing process with pid 58138 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58138' 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58138 00:08:50.341 11:20:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58138 00:08:50.909 [2024-11-20 11:20:58.495373] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:51.910 ************************************ 00:08:51.910 END TEST event_scheduler 00:08:51.910 ************************************ 00:08:51.910 00:08:51.910 real 0m3.268s 00:08:51.910 user 0m5.192s 00:08:51.910 sys 0m0.497s 00:08:51.910 11:20:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.910 11:20:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 11:20:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:51.910 11:20:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:51.910 11:20:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.910 11:20:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.910 11:20:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 ************************************ 00:08:51.910 START TEST app_repeat 00:08:51.910 ************************************ 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:51.910 Process app_repeat pid: 58227 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58227 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58227' 00:08:51.910 spdk_app_start Round 0 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:51.910 11:20:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58227 /var/tmp/spdk-nbd.sock 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58227 ']' 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.910 11:20:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 [2024-11-20 11:20:59.693737] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:08:51.910 [2024-11-20 11:20:59.693943] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58227 ] 00:08:52.168 [2024-11-20 11:20:59.877589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.427 [2024-11-20 11:21:00.013582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.427 [2024-11-20 11:21:00.013593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.995 11:21:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.995 11:21:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:52.995 11:21:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:53.254 Malloc0 00:08:53.254 11:21:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:53.822 Malloc1 00:08:53.822 11:21:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:53.822 11:21:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:53.822 11:21:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:54.081 /dev/nbd0 00:08:54.081 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:54.081 11:21:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.081 1+0 records in 00:08:54.081 1+0 
records out 00:08:54.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362933 s, 11.3 MB/s 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.081 11:21:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:54.082 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.082 11:21:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.082 11:21:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:54.340 /dev/nbd1 00:08:54.340 11:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.341 1+0 records in 00:08:54.341 1+0 records out 00:08:54.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251121 s, 16.3 MB/s 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.341 11:21:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.341 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:54.600 { 00:08:54.600 "nbd_device": "/dev/nbd0", 00:08:54.600 "bdev_name": "Malloc0" 00:08:54.600 }, 00:08:54.600 { 00:08:54.600 "nbd_device": "/dev/nbd1", 00:08:54.600 "bdev_name": "Malloc1" 00:08:54.600 } 00:08:54.600 ]' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:54.600 { 00:08:54.600 "nbd_device": "/dev/nbd0", 00:08:54.600 "bdev_name": "Malloc0" 00:08:54.600 }, 00:08:54.600 { 00:08:54.600 "nbd_device": "/dev/nbd1", 00:08:54.600 "bdev_name": "Malloc1" 00:08:54.600 } 00:08:54.600 ]' 
00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:54.600 /dev/nbd1' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:54.600 /dev/nbd1' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:54.600 256+0 records in 00:08:54.600 256+0 records out 00:08:54.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00918315 s, 114 MB/s 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.600 256+0 records in 00:08:54.600 256+0 records out 00:08:54.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274073 s, 38.3 MB/s 00:08:54.600 11:21:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.600 11:21:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:54.859 256+0 records in 00:08:54.859 256+0 records out 00:08:54.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298535 s, 35.1 MB/s 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.859 11:21:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.117 11:21:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.375 11:21:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:55.635 11:21:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:55.635 11:21:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:56.258 11:21:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:57.195 [2024-11-20 11:21:04.918742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.453 [2024-11-20 11:21:05.047968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.453 [2024-11-20 11:21:05.047972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.453 
[2024-11-20 11:21:05.242142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:57.453 [2024-11-20 11:21:05.242253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:59.357 11:21:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:59.357 spdk_app_start Round 1 00:08:59.357 11:21:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:59.357 11:21:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58227 /var/tmp/spdk-nbd.sock 00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58227 ']' 00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.357 11:21:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:59.357 11:21:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.357 11:21:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:59.357 11:21:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:59.923 Malloc0 00:08:59.923 11:21:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:00.182 Malloc1 00:09:00.182 11:21:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:00.182 11:21:07 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.182 11:21:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:00.440 /dev/nbd0 00:09:00.440 11:21:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:00.440 11:21:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.440 1+0 records in 00:09:00.440 1+0 records out 00:09:00.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261432 s, 15.7 MB/s 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:00.440 
11:21:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:00.440 11:21:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:00.440 11:21:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.440 11:21:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:00.440 11:21:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:00.699 /dev/nbd1 00:09:00.699 11:21:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:00.699 11:21:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:00.699 1+0 records in 00:09:00.699 1+0 records out 00:09:00.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427832 s, 9.6 MB/s 00:09:00.699 11:21:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.019 11:21:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:01.019 11:21:08 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.019 11:21:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.019 11:21:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:01.019 11:21:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.019 11:21:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.019 11:21:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.019 11:21:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.019 11:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:01.277 { 00:09:01.277 "nbd_device": "/dev/nbd0", 00:09:01.277 "bdev_name": "Malloc0" 00:09:01.277 }, 00:09:01.277 { 00:09:01.277 "nbd_device": "/dev/nbd1", 00:09:01.277 "bdev_name": "Malloc1" 00:09:01.277 } 00:09:01.277 ]' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:01.277 { 00:09:01.277 "nbd_device": "/dev/nbd0", 00:09:01.277 "bdev_name": "Malloc0" 00:09:01.277 }, 00:09:01.277 { 00:09:01.277 "nbd_device": "/dev/nbd1", 00:09:01.277 "bdev_name": "Malloc1" 00:09:01.277 } 00:09:01.277 ]' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:01.277 /dev/nbd1' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:01.277 /dev/nbd1' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:01.277 
11:21:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:01.277 11:21:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:01.277 256+0 records in 00:09:01.277 256+0 records out 00:09:01.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.008338 s, 126 MB/s 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:01.277 256+0 records in 00:09:01.277 256+0 records out 00:09:01.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270636 s, 38.7 MB/s 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:01.277 256+0 records in 00:09:01.277 256+0 records out 00:09:01.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314634 s, 33.3 MB/s 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.277 11:21:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.534 11:21:09 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.534 11:21:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.098 11:21:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.355 11:21:10 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:02.355 11:21:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:02.355 11:21:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:02.922 11:21:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:04.298 [2024-11-20 11:21:11.728548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.298 [2024-11-20 11:21:11.877287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.298 [2024-11-20 11:21:11.877293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.298 [2024-11-20 11:21:12.092631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:04.298 [2024-11-20 11:21:12.092773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:06.201 spdk_app_start Round 2 00:09:06.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:06.201 11:21:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:06.201 11:21:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:06.201 11:21:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58227 /var/tmp/spdk-nbd.sock 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58227 ']' 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.201 11:21:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:06.201 11:21:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:06.460 Malloc0 00:09:06.460 11:21:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:06.719 Malloc1 00:09:06.719 11:21:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:06.719 11:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:06.978 /dev/nbd0 00:09:07.237 11:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:07.237 11:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:07.237 1+0 records in 00:09:07.237 1+0 records out 00:09:07.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253398 s, 16.2 MB/s 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:07.237 11:21:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:07.237 11:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:07.237 11:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:07.237 11:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:07.496 /dev/nbd1 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:07.496 11:21:15 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:07.496 1+0 records in 00:09:07.496 1+0 records out 00:09:07.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278571 s, 14.7 MB/s 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:07.496 11:21:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.496 11:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:07.754 { 00:09:07.754 "nbd_device": "/dev/nbd0", 00:09:07.754 "bdev_name": "Malloc0" 00:09:07.754 }, 00:09:07.754 { 00:09:07.754 "nbd_device": "/dev/nbd1", 00:09:07.754 "bdev_name": "Malloc1" 00:09:07.754 } 00:09:07.754 ]' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:07.754 { 00:09:07.754 "nbd_device": "/dev/nbd0", 00:09:07.754 "bdev_name": "Malloc0" 00:09:07.754 }, 00:09:07.754 { 00:09:07.754 "nbd_device": "/dev/nbd1", 00:09:07.754 "bdev_name": "Malloc1" 00:09:07.754 } 00:09:07.754 ]' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:07.754 /dev/nbd1' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:07.754 /dev/nbd1' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:07.754 256+0 records in 00:09:07.754 256+0 records out 00:09:07.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743649 s, 141 MB/s 00:09:07.754 11:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.754 11:21:15 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:08.013 256+0 records in 00:09:08.013 256+0 records out 00:09:08.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239191 s, 43.8 MB/s 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:08.013 256+0 records in 00:09:08.013 256+0 records out 00:09:08.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326604 s, 32.1 MB/s 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.013 11:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.272 11:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:08.531 11:21:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.532 11:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:09.100 11:21:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:09.100 11:21:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:09.358 11:21:17 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:09:10.734 [2024-11-20 11:21:18.309963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:10.734 [2024-11-20 11:21:18.429479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.734 [2024-11-20 11:21:18.429494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.993 [2024-11-20 11:21:18.622494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:10.993 [2024-11-20 11:21:18.622610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:12.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:12.439 11:21:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58227 /var/tmp/spdk-nbd.sock 00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58227 ']' 00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.439 11:21:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:12.697 11:21:20 event.app_repeat -- event/event.sh@39 -- # killprocess 58227 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58227 ']' 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58227 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58227 00:09:12.697 killing process with pid 58227 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58227' 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58227 00:09:12.697 11:21:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58227 00:09:13.632 spdk_app_start is called in Round 0. 00:09:13.632 Shutdown signal received, stop current app iteration 00:09:13.632 Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 reinitialization... 00:09:13.632 spdk_app_start is called in Round 1. 00:09:13.632 Shutdown signal received, stop current app iteration 00:09:13.632 Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 reinitialization... 00:09:13.632 spdk_app_start is called in Round 2. 
00:09:13.632 Shutdown signal received, stop current app iteration 00:09:13.632 Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 reinitialization... 00:09:13.632 spdk_app_start is called in Round 3. 00:09:13.632 Shutdown signal received, stop current app iteration 00:09:13.632 11:21:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:13.632 11:21:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:13.632 00:09:13.632 real 0m21.807s 00:09:13.632 user 0m48.154s 00:09:13.632 sys 0m3.186s 00:09:13.632 11:21:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.632 11:21:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:13.632 ************************************ 00:09:13.632 END TEST app_repeat 00:09:13.632 ************************************ 00:09:13.890 11:21:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:13.891 11:21:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:13.891 11:21:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.891 11:21:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.891 11:21:21 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 ************************************ 00:09:13.891 START TEST cpu_locks 00:09:13.891 ************************************ 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:13.891 * Looking for test storage... 
00:09:13.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.891 11:21:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.891 --rc genhtml_branch_coverage=1 00:09:13.891 --rc genhtml_function_coverage=1 00:09:13.891 --rc genhtml_legend=1 00:09:13.891 --rc geninfo_all_blocks=1 00:09:13.891 --rc geninfo_unexecuted_blocks=1 00:09:13.891 00:09:13.891 ' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.891 --rc genhtml_branch_coverage=1 00:09:13.891 --rc genhtml_function_coverage=1 00:09:13.891 --rc genhtml_legend=1 00:09:13.891 --rc geninfo_all_blocks=1 00:09:13.891 --rc geninfo_unexecuted_blocks=1 
00:09:13.891 00:09:13.891 ' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.891 --rc genhtml_branch_coverage=1 00:09:13.891 --rc genhtml_function_coverage=1 00:09:13.891 --rc genhtml_legend=1 00:09:13.891 --rc geninfo_all_blocks=1 00:09:13.891 --rc geninfo_unexecuted_blocks=1 00:09:13.891 00:09:13.891 ' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.891 --rc genhtml_branch_coverage=1 00:09:13.891 --rc genhtml_function_coverage=1 00:09:13.891 --rc genhtml_legend=1 00:09:13.891 --rc geninfo_all_blocks=1 00:09:13.891 --rc geninfo_unexecuted_blocks=1 00:09:13.891 00:09:13.891 ' 00:09:13.891 11:21:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:13.891 11:21:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:13.891 11:21:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:13.891 11:21:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.891 11:21:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.891 ************************************ 00:09:13.891 START TEST default_locks 00:09:13.891 ************************************ 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58697 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58697 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58697 ']' 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.891 11:21:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.149 [2024-11-20 11:21:21.825930] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:09:14.149 [2024-11-20 11:21:21.826328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58697 ] 00:09:14.407 [2024-11-20 11:21:21.996551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.407 [2024-11-20 11:21:22.123947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.344 11:21:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.344 11:21:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:15.344 11:21:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58697 00:09:15.344 11:21:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58697 00:09:15.344 11:21:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58697 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58697 ']' 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58697 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.602 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58697 00:09:15.860 killing process with pid 58697 00:09:15.860 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.860 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.860 11:21:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58697' 00:09:15.860 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58697 00:09:15.860 11:21:23 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58697 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58697 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58697 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:18.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.409 ERROR: process (pid: 58697) is no longer running 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58697 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58697 ']' 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58697) - No such process 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:18.409 00:09:18.409 real 0m3.986s 00:09:18.409 user 0m4.024s 00:09:18.409 sys 0m0.728s 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.409 11:21:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.409 ************************************ 00:09:18.409 END TEST default_locks 00:09:18.409 ************************************ 00:09:18.409 11:21:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:18.409 11:21:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.409 11:21:25 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.409 11:21:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.409 ************************************ 00:09:18.409 START TEST default_locks_via_rpc 00:09:18.409 ************************************ 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58772 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58772 00:09:18.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58772 ']' 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.409 11:21:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.409 [2024-11-20 11:21:25.868719] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:09:18.410 [2024-11-20 11:21:25.868881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ] 00:09:18.410 [2024-11-20 11:21:26.048842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.410 [2024-11-20 11:21:26.201959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.345 11:21:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58772 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58772 00:09:19.345 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58772 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58772 ']' 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58772 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58772 00:09:19.912 killing process with pid 58772 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58772' 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58772 00:09:19.912 11:21:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58772 00:09:22.443 00:09:22.443 real 0m4.097s 00:09:22.443 user 0m4.136s 00:09:22.443 sys 0m0.769s 00:09:22.443 11:21:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.443 11:21:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.443 ************************************ 00:09:22.443 END TEST default_locks_via_rpc 00:09:22.443 ************************************ 00:09:22.443 11:21:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:22.443 11:21:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.443 11:21:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.443 11:21:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.443 ************************************ 00:09:22.443 START TEST non_locking_app_on_locked_coremask 00:09:22.443 ************************************ 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58846 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58846 /var/tmp/spdk.sock 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58846 ']' 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.443 11:21:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:22.443 [2024-11-20 11:21:30.020139] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:09:22.443 [2024-11-20 11:21:30.020559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58846 ] 00:09:22.443 [2024-11-20 11:21:30.211278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.702 [2024-11-20 11:21:30.368727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58866 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58866 /var/tmp/spdk2.sock 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:23.638 11:21:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58866 ']' 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.638 11:21:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 [2024-11-20 11:21:31.366973] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:09:23.638 [2024-11-20 11:21:31.367144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:09:23.897 [2024-11-20 11:21:31.569872] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:23.897 [2024-11-20 11:21:31.569959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.155 [2024-11-20 11:21:31.828278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.702 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.702 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:26.702 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58846 00:09:26.702 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58846 00:09:26.702 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58846 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58846 ']' 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58846 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58846 00:09:27.268 killing process with pid 58846 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58846' 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58846 00:09:27.268 11:21:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58846 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58866 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58866 ']' 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58866 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58866 00:09:31.455 killing process with pid 58866 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58866' 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58866 00:09:31.455 11:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58866 00:09:33.991 ************************************ 00:09:33.991 END TEST non_locking_app_on_locked_coremask 00:09:33.991 ************************************ 00:09:33.991 00:09:33.991 real 0m11.575s 
00:09:33.991 user 0m12.107s 00:09:33.991 sys 0m1.480s 00:09:33.991 11:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.991 11:21:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.991 11:21:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:33.991 11:21:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.991 11:21:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.991 11:21:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:33.991 ************************************ 00:09:33.991 START TEST locking_app_on_unlocked_coremask 00:09:33.991 ************************************ 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59014 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59014 /var/tmp/spdk.sock 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59014 ']' 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.991 11:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:33.991 [2024-11-20 11:21:41.613457] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:09:33.991 [2024-11-20 11:21:41.613607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:09:33.991 [2024-11-20 11:21:41.823257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:33.991 [2024-11-20 11:21:41.823323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.249 [2024-11-20 11:21:41.952804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59041 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59041 /var/tmp/spdk2.sock 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59041 ']' 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:35.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.183 11:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:35.183 [2024-11-20 11:21:42.922980] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:09:35.183 [2024-11-20 11:21:42.923135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:09:35.441 [2024-11-20 11:21:43.116970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.699 [2024-11-20 11:21:43.375483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.229 11:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.229 11:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:38.229 11:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59041 00:09:38.229 11:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59041 00:09:38.229 11:21:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59014 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59014 ']' 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59014 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59014 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:09:38.794 killing process with pid 59014 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59014' 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59014 00:09:38.794 11:21:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59014 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59041 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59041 ']' 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59041 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.056 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59041 00:09:44.057 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.057 killing process with pid 59041 00:09:44.057 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.057 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59041' 00:09:44.057 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59041 00:09:44.057 11:21:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59041 00:09:45.434 00:09:45.434 real 0m11.632s 00:09:45.434 user 0m12.162s 00:09:45.434 sys 0m1.363s 00:09:45.434 ************************************ 00:09:45.434 END TEST locking_app_on_unlocked_coremask 00:09:45.434 ************************************ 00:09:45.434 11:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.434 11:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.434 11:21:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:45.434 11:21:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.435 11:21:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.435 11:21:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 ************************************ 00:09:45.435 START TEST locking_app_on_locked_coremask 00:09:45.435 ************************************ 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59189 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59189 /var/tmp/spdk.sock 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59189 ']' 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.435 11:21:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.698 [2024-11-20 11:21:53.329314] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:09:45.698 [2024-11-20 11:21:53.329607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59189 ] 00:09:45.698 [2024-11-20 11:21:53.516180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.957 [2024-11-20 11:21:53.649017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59206 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59206 /var/tmp/spdk2.sock 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59206 /var/tmp/spdk2.sock 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59206 /var/tmp/spdk2.sock 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59206 ']' 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.893 11:21:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:46.893 [2024-11-20 11:21:54.703504] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:09:46.893 [2024-11-20 11:21:54.703673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:09:47.152 [2024-11-20 11:21:54.897172] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59189 has claimed it. 00:09:47.152 [2024-11-20 11:21:54.897263] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:47.719 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59206) - No such process 00:09:47.719 ERROR: process (pid: 59206) is no longer running 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59189 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:47.719 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59189 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59189 00:09:47.979 11:21:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59189 ']' 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59189 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59189 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.979 killing process with pid 59189 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59189' 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59189 00:09:47.979 11:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59189 00:09:50.511 00:09:50.511 real 0m4.816s 00:09:50.511 user 0m5.157s 00:09:50.511 sys 0m0.881s 00:09:50.511 11:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.511 ************************************ 00:09:50.511 END TEST locking_app_on_locked_coremask 00:09:50.511 ************************************ 00:09:50.511 11:21:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.511 11:21:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:50.511 11:21:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:09:50.511 11:21:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:50.511 11:21:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:50.511 ************************************
00:09:50.511 START TEST locking_overlapped_coremask
00:09:50.511 ************************************
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59276
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59276 /var/tmp/spdk.sock
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59276 ']'
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:50.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:50.511 11:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.511 [2024-11-20 11:21:58.176478] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:09:50.511 [2024-11-20 11:21:58.176673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59276 ]
00:09:50.511 [2024-11-20 11:21:58.352597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:50.770 [2024-11-20 11:21:58.492103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:50.770 [2024-11-20 11:21:58.492415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:50.770 [2024-11-20 11:21:58.492441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59294
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59294 /var/tmp/spdk2.sock
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59294 /var/tmp/spdk2.sock
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59294 /var/tmp/spdk2.sock
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59294 ']'
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:51.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:51.705 11:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.705 [2024-11-20 11:21:59.476976] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:09:51.705 [2024-11-20 11:21:59.477132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ]
00:09:51.963 [2024-11-20 11:21:59.674398] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59276 has claimed it.
00:09:51.963 [2024-11-20 11:21:59.674495] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:52.530 ERROR: process (pid: 59294) is no longer running
00:09:52.530 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59294) - No such process
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:52.530 11:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59276
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59276 ']'
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59276
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59276
00:09:52.531 killing process with pid 59276
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59276'
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59276
00:09:52.531 11:22:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59276
00:09:55.060
00:09:55.060 real 0m4.461s
00:09:55.060 user 0m12.219s
00:09:55.060 sys 0m0.701s
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:55.060 ************************************
00:09:55.060 END TEST locking_overlapped_coremask
00:09:55.060 ************************************
00:09:55.060 11:22:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:55.060 11:22:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:55.060 11:22:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:55.060 11:22:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:55.060 ************************************
00:09:55.060 START TEST locking_overlapped_coremask_via_rpc
00:09:55.060 ************************************
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59360
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59360 /var/tmp/spdk.sock
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59360 ']'
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:55.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:55.060 11:22:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.060 [2024-11-20 11:22:02.690373] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:09:55.060 [2024-11-20 11:22:02.690533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59360 ]
00:09:55.060 [2024-11-20 11:22:02.873683] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:55.060 [2024-11-20 11:22:02.873767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:55.319 [2024-11-20 11:22:03.037336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:55.319 [2024-11-20 11:22:03.037476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.319 [2024-11-20 11:22:03.037485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:56.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59383
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59383 /var/tmp/spdk2.sock
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59383 ']'
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:56.274 11:22:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:56.274 [2024-11-20 11:22:04.056089] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:09:56.274 [2024-11-20 11:22:04.056507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ]
00:09:56.554 [2024-11-20 11:22:04.262252] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:56.554 [2024-11-20 11:22:04.262349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:56.826 [2024-11-20 11:22:04.541338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:56.826 [2024-11-20 11:22:04.541445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:56.826 [2024-11-20 11:22:04.541464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:59.355 [2024-11-20 11:22:06.878863] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59360 has claimed it.
00:09:59.355 request:
00:09:59.355 {
00:09:59.355 "method": "framework_enable_cpumask_locks",
00:09:59.355 "req_id": 1
00:09:59.355 }
00:09:59.355 Got JSON-RPC error response
00:09:59.355 response:
00:09:59.355 {
00:09:59.355 "code": -32603,
00:09:59.355 "message": "Failed to claim CPU core: 2"
00:09:59.355 }
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59360 /var/tmp/spdk.sock
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59360 ']'
00:09:59.355 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.356 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.356 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.356 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.356 11:22:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59383 /var/tmp/spdk2.sock
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59383 ']'
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:59.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.356 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks ************************************
00:09:59.615 END TEST locking_overlapped_coremask_via_rpc ************************************
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:59.615
00:09:59.615 real 0m4.861s
00:09:59.615 user 0m1.751s
00:09:59.615 sys 0m0.260s
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.615 11:22:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:59.875 11:22:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:09:59.875 11:22:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59360 ]]
00:09:59.875 11:22:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59360
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59360 ']'
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59360
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59360
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 killing process with pid 59360
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59360'
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59360
00:09:59.875 11:22:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59360
00:10:02.410 11:22:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59383 ]]
00:10:02.410 11:22:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59383
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59383 ']'
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59383
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59383
00:10:02.410 killing process with pid 59383
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59383'
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59383
00:10:02.410 11:22:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59383
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59360 ]]
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59360
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59360 ']'
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59360
00:10:04.316 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59360) - No such process
00:10:04.316 Process with pid 59360 is not found
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59360 is not found'
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59383 ]]
00:10:04.316 Process with pid 59383 is not found
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59383
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59383 ']'
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59383
00:10:04.316 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59383) - No such process
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59383 is not found'
00:10:04.316 11:22:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:04.316 ************************************
00:10:04.316 END TEST cpu_locks
00:10:04.316 ************************************
00:10:04.316
00:10:04.316 real 0m50.527s
00:10:04.316 user 1m27.689s
00:10:04.316 sys 0m7.408s
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:04.316 11:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:04.316 ************************************
00:10:04.316 END TEST event
00:10:04.316 ************************************
00:10:04.316
00:10:04.316 real 1m20.964s
00:10:04.316 user 2m28.428s
00:10:04.316 sys 0m11.688s
00:10:04.316 11:22:12 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:04.316 11:22:12 event -- common/autotest_common.sh@10 -- # set +x
00:10:04.316 11:22:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:10:04.316 11:22:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:04.316 11:22:12 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:04.316 11:22:12 -- common/autotest_common.sh@10 -- # set +x
00:10:04.316 ************************************
00:10:04.316 START TEST thread
00:10:04.316 ************************************
00:10:04.316 11:22:12 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:10:04.575 * Looking for test storage...
00:10:04.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:04.575 11:22:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:04.575 11:22:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:04.575 11:22:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:04.575 11:22:12 thread -- scripts/common.sh@336 -- # IFS=.-:
00:10:04.575 11:22:12 thread -- scripts/common.sh@336 -- # read -ra ver1
00:10:04.575 11:22:12 thread -- scripts/common.sh@337 -- # IFS=.-:
00:10:04.575 11:22:12 thread -- scripts/common.sh@337 -- # read -ra ver2
00:10:04.575 11:22:12 thread -- scripts/common.sh@338 -- # local 'op=<'
00:10:04.575 11:22:12 thread -- scripts/common.sh@340 -- # ver1_l=2
00:10:04.575 11:22:12 thread -- scripts/common.sh@341 -- # ver2_l=1
00:10:04.575 11:22:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:04.575 11:22:12 thread -- scripts/common.sh@344 -- # case "$op" in
00:10:04.575 11:22:12 thread -- scripts/common.sh@345 -- # : 1
00:10:04.575 11:22:12 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:04.575 11:22:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:04.575 11:22:12 thread -- scripts/common.sh@365 -- # decimal 1
00:10:04.575 11:22:12 thread -- scripts/common.sh@353 -- # local d=1
00:10:04.575 11:22:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:04.575 11:22:12 thread -- scripts/common.sh@355 -- # echo 1
00:10:04.575 11:22:12 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:10:04.575 11:22:12 thread -- scripts/common.sh@366 -- # decimal 2
00:10:04.575 11:22:12 thread -- scripts/common.sh@353 -- # local d=2
00:10:04.575 11:22:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:04.575 11:22:12 thread -- scripts/common.sh@355 -- # echo 2
00:10:04.575 11:22:12 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:10:04.575 11:22:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:04.575 11:22:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:04.575 11:22:12 thread -- scripts/common.sh@368 -- # return 0
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:04.575 11:22:12 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:04.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:04.575 --rc genhtml_branch_coverage=1
00:10:04.575 --rc genhtml_function_coverage=1
00:10:04.575 --rc genhtml_legend=1
00:10:04.575 --rc geninfo_all_blocks=1
00:10:04.576 --rc geninfo_unexecuted_blocks=1
00:10:04.576
00:10:04.576 '
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:04.576 --rc genhtml_branch_coverage=1
00:10:04.576 --rc genhtml_function_coverage=1
00:10:04.576 --rc genhtml_legend=1
00:10:04.576 --rc geninfo_all_blocks=1
00:10:04.576 --rc geninfo_unexecuted_blocks=1
00:10:04.576
00:10:04.576 '
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:04.576 --rc genhtml_branch_coverage=1
00:10:04.576 --rc genhtml_function_coverage=1
00:10:04.576 --rc genhtml_legend=1
00:10:04.576 --rc geninfo_all_blocks=1
00:10:04.576 --rc geninfo_unexecuted_blocks=1
00:10:04.576
00:10:04.576 '
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:04.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:04.576 --rc genhtml_branch_coverage=1
00:10:04.576 --rc genhtml_function_coverage=1
00:10:04.576 --rc genhtml_legend=1
00:10:04.576 --rc geninfo_all_blocks=1
00:10:04.576 --rc geninfo_unexecuted_blocks=1
00:10:04.576
00:10:04.576 '
00:10:04.576 11:22:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:04.576 11:22:12 thread -- common/autotest_common.sh@10 -- # set +x
00:10:04.576 ************************************
00:10:04.576 START TEST thread_poller_perf
00:10:04.576 ************************************
00:10:04.576 11:22:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:04.576 [2024-11-20 11:22:12.325491] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:04.576 [2024-11-20 11:22:12.325971] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59585 ]
00:10:04.834 [2024-11-20 11:22:12.525420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.095 [2024-11-20 11:22:12.681032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.095 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:10:06.472 [2024-11-20T11:22:14.318Z] ======================================
00:10:06.472 [2024-11-20T11:22:14.318Z] busy:2213072916 (cyc)
00:10:06.472 [2024-11-20T11:22:14.318Z] total_run_count: 286000
00:10:06.472 [2024-11-20T11:22:14.318Z] tsc_hz: 2200000000 (cyc)
00:10:06.472 [2024-11-20T11:22:14.318Z] ======================================
00:10:06.472 [2024-11-20T11:22:14.318Z] poller_cost: 7738 (cyc), 3517 (nsec)
00:10:06.472
00:10:06.472 real 0m1.649s
00:10:06.472 user 0m1.420s
00:10:06.472 sys 0m0.117s
00:10:06.472 11:22:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.472 ************************************
00:10:06.472 END TEST thread_poller_perf
00:10:06.472 ************************************
00:10:06.472 11:22:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:06.472 11:22:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:06.472 11:22:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:06.472 11:22:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.472 11:22:13 thread -- common/autotest_common.sh@10 -- # set +x
00:10:06.472 ************************************
00:10:06.472 START TEST thread_poller_perf ************************************
00:10:06.472 11:22:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:06.472 [2024-11-20 11:22:14.016358] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:06.472 [2024-11-20 11:22:14.016714] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ]
00:10:06.472 [2024-11-20 11:22:14.192794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.731 [2024-11-20 11:22:14.321799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.731 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:10:08.109 [2024-11-20T11:22:15.955Z] ======================================
00:10:08.109 [2024-11-20T11:22:15.955Z] busy:2203935080 (cyc)
00:10:08.109 [2024-11-20T11:22:15.955Z] total_run_count: 3804000
00:10:08.109 [2024-11-20T11:22:15.955Z] tsc_hz: 2200000000 (cyc)
00:10:08.109 [2024-11-20T11:22:15.955Z] ======================================
00:10:08.109 [2024-11-20T11:22:15.955Z] poller_cost: 579 (cyc), 263 (nsec)
00:10:08.109
00:10:08.109 real 0m1.578s
00:10:08.109 user 0m1.370s
00:10:08.109 sys 0m0.098s
00:10:08.109 11:22:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:08.109 11:22:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:08.109 ************************************
00:10:08.109 END TEST thread_poller_perf
00:10:08.109 ************************************
00:10:08.109 11:22:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:10:08.109
00:10:08.109 real 0m3.492s
00:10:08.109 user 0m2.923s
00:10:08.109 sys 0m0.347s
00:10:08.109 11:22:15 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:08.109 11:22:15 thread -- common/autotest_common.sh@10 -- # set +x
00:10:08.109 ************************************
00:10:08.109 END TEST thread
00:10:08.109 ************************************
00:10:08.109 11:22:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:10:08.109 11:22:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:10:08.109 11:22:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:08.109 11:22:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:08.109 11:22:15 -- common/autotest_common.sh@10 -- # set +x
00:10:08.109 ************************************
00:10:08.109 START TEST app_cmdline
00:10:08.109 ************************************
00:10:08.109 11:22:15 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:10:08.109 * Looking for test storage...
00:10:08.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:08.109 11:22:15 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:08.109 11:22:15 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:10:08.109 11:22:15 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:08.109 11:22:15 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@345 -- # : 1
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:10:08.109 11:22:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:08.110 11:22:15 app_cmdline -- scripts/common.sh@368 -- # return 0
00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:08.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:08.110 --rc genhtml_branch_coverage=1
00:10:08.110 --rc genhtml_function_coverage=1
00:10:08.110 --rc
genhtml_legend=1 00:10:08.110 --rc geninfo_all_blocks=1 00:10:08.110 --rc geninfo_unexecuted_blocks=1 00:10:08.110 00:10:08.110 ' 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.110 --rc genhtml_branch_coverage=1 00:10:08.110 --rc genhtml_function_coverage=1 00:10:08.110 --rc genhtml_legend=1 00:10:08.110 --rc geninfo_all_blocks=1 00:10:08.110 --rc geninfo_unexecuted_blocks=1 00:10:08.110 00:10:08.110 ' 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.110 --rc genhtml_branch_coverage=1 00:10:08.110 --rc genhtml_function_coverage=1 00:10:08.110 --rc genhtml_legend=1 00:10:08.110 --rc geninfo_all_blocks=1 00:10:08.110 --rc geninfo_unexecuted_blocks=1 00:10:08.110 00:10:08.110 ' 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.110 --rc genhtml_branch_coverage=1 00:10:08.110 --rc genhtml_function_coverage=1 00:10:08.110 --rc genhtml_legend=1 00:10:08.110 --rc geninfo_all_blocks=1 00:10:08.110 --rc geninfo_unexecuted_blocks=1 00:10:08.110 00:10:08.110 ' 00:10:08.110 11:22:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:08.110 11:22:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59705 00:10:08.110 11:22:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59705 00:10:08.110 11:22:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59705 ']' 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:10:08.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.110 11:22:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:08.369 [2024-11-20 11:22:15.955379] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:08.369 [2024-11-20 11:22:15.955568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:10:08.369 [2024-11-20 11:22:16.140608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.628 [2024-11-20 11:22:16.265661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.563 11:22:17 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.563 11:22:17 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:09.563 11:22:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:09.563 { 00:10:09.563 "version": "SPDK v25.01-pre git sha1 c0b2ac5c9", 00:10:09.563 "fields": { 00:10:09.563 "major": 25, 00:10:09.563 "minor": 1, 00:10:09.563 "patch": 0, 00:10:09.563 "suffix": "-pre", 00:10:09.563 "commit": "c0b2ac5c9" 00:10:09.563 } 00:10:09.563 } 00:10:09.821 11:22:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:09.821 11:22:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:09.821 11:22:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:09.822 11:22:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:09.822 11:22:17 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:10.080 request: 00:10:10.080 { 00:10:10.080 "method": "env_dpdk_get_mem_stats", 00:10:10.080 "req_id": 1 00:10:10.080 } 00:10:10.080 Got JSON-RPC error response 00:10:10.080 response: 00:10:10.080 { 00:10:10.080 "code": -32601, 00:10:10.080 "message": "Method not found" 00:10:10.080 } 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.080 11:22:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59705 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59705 ']' 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59705 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59705 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.080 killing process with pid 59705 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59705' 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 59705 00:10:10.080 11:22:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 59705 00:10:12.615 00:10:12.615 real 0m4.363s 00:10:12.615 user 0m4.864s 00:10:12.615 sys 0m0.663s 00:10:12.615 11:22:20 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.615 ************************************ 00:10:12.615 END TEST app_cmdline 00:10:12.615 ************************************ 00:10:12.615 11:22:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:12.615 11:22:20 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:12.615 11:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.615 11:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.615 11:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:12.615 ************************************ 00:10:12.615 START TEST version 00:10:12.615 ************************************ 00:10:12.615 11:22:20 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:12.615 * Looking for test storage... 00:10:12.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:12.615 11:22:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.615 11:22:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.615 11:22:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.615 11:22:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.615 11:22:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.615 11:22:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.615 11:22:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.615 11:22:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.615 11:22:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.615 11:22:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.615 11:22:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.615 11:22:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.615 11:22:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.615 11:22:20 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:12.615 11:22:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.615 11:22:20 version -- scripts/common.sh@344 -- # case "$op" in 00:10:12.615 11:22:20 version -- scripts/common.sh@345 -- # : 1 00:10:12.615 11:22:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.615 11:22:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.615 11:22:20 version -- scripts/common.sh@365 -- # decimal 1 00:10:12.615 11:22:20 version -- scripts/common.sh@353 -- # local d=1 00:10:12.615 11:22:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.615 11:22:20 version -- scripts/common.sh@355 -- # echo 1 00:10:12.615 11:22:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.615 11:22:20 version -- scripts/common.sh@366 -- # decimal 2 00:10:12.615 11:22:20 version -- scripts/common.sh@353 -- # local d=2 00:10:12.616 11:22:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.616 11:22:20 version -- scripts/common.sh@355 -- # echo 2 00:10:12.616 11:22:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.616 11:22:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.616 11:22:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.616 11:22:20 version -- scripts/common.sh@368 -- # return 0 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.616 --rc genhtml_branch_coverage=1 00:10:12.616 --rc genhtml_function_coverage=1 00:10:12.616 --rc genhtml_legend=1 00:10:12.616 --rc geninfo_all_blocks=1 00:10:12.616 --rc geninfo_unexecuted_blocks=1 00:10:12.616 00:10:12.616 ' 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:10:12.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.616 --rc genhtml_branch_coverage=1 00:10:12.616 --rc genhtml_function_coverage=1 00:10:12.616 --rc genhtml_legend=1 00:10:12.616 --rc geninfo_all_blocks=1 00:10:12.616 --rc geninfo_unexecuted_blocks=1 00:10:12.616 00:10:12.616 ' 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.616 --rc genhtml_branch_coverage=1 00:10:12.616 --rc genhtml_function_coverage=1 00:10:12.616 --rc genhtml_legend=1 00:10:12.616 --rc geninfo_all_blocks=1 00:10:12.616 --rc geninfo_unexecuted_blocks=1 00:10:12.616 00:10:12.616 ' 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.616 --rc genhtml_branch_coverage=1 00:10:12.616 --rc genhtml_function_coverage=1 00:10:12.616 --rc genhtml_legend=1 00:10:12.616 --rc geninfo_all_blocks=1 00:10:12.616 --rc geninfo_unexecuted_blocks=1 00:10:12.616 00:10:12.616 ' 00:10:12.616 11:22:20 version -- app/version.sh@17 -- # get_header_version major 00:10:12.616 11:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:12.616 11:22:20 version -- app/version.sh@17 -- # major=25 00:10:12.616 11:22:20 version -- app/version.sh@18 -- # get_header_version minor 00:10:12.616 11:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:12.616 11:22:20 version -- app/version.sh@18 -- # minor=1 00:10:12.616 11:22:20 
version -- app/version.sh@19 -- # get_header_version patch 00:10:12.616 11:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:12.616 11:22:20 version -- app/version.sh@19 -- # patch=0 00:10:12.616 11:22:20 version -- app/version.sh@20 -- # get_header_version suffix 00:10:12.616 11:22:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # cut -f2 00:10:12.616 11:22:20 version -- app/version.sh@14 -- # tr -d '"' 00:10:12.616 11:22:20 version -- app/version.sh@20 -- # suffix=-pre 00:10:12.616 11:22:20 version -- app/version.sh@22 -- # version=25.1 00:10:12.616 11:22:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:12.616 11:22:20 version -- app/version.sh@28 -- # version=25.1rc0 00:10:12.616 11:22:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:12.616 11:22:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:12.616 11:22:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:12.616 11:22:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:12.616 00:10:12.616 real 0m0.269s 00:10:12.616 user 0m0.172s 00:10:12.616 sys 0m0.137s 00:10:12.616 11:22:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.616 ************************************ 00:10:12.616 END TEST version 00:10:12.616 11:22:20 version -- common/autotest_common.sh@10 -- # set +x 00:10:12.616 ************************************ 00:10:12.616 
11:22:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:12.616 11:22:20 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:12.616 11:22:20 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:12.616 11:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.616 11:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.616 11:22:20 -- common/autotest_common.sh@10 -- # set +x 00:10:12.616 ************************************ 00:10:12.616 START TEST bdev_raid 00:10:12.616 ************************************ 00:10:12.616 11:22:20 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:12.616 * Looking for test storage... 00:10:12.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:12.875 11:22:20 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.875 11:22:20 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.875 11:22:20 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.875 11:22:20 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:12.875 11:22:20 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.876 11:22:20 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.876 --rc genhtml_branch_coverage=1 00:10:12.876 --rc genhtml_function_coverage=1 00:10:12.876 --rc genhtml_legend=1 00:10:12.876 --rc geninfo_all_blocks=1 00:10:12.876 --rc geninfo_unexecuted_blocks=1 00:10:12.876 00:10:12.876 ' 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.876 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:12.876 --rc genhtml_branch_coverage=1 00:10:12.876 --rc genhtml_function_coverage=1 00:10:12.876 --rc genhtml_legend=1 00:10:12.876 --rc geninfo_all_blocks=1 00:10:12.876 --rc geninfo_unexecuted_blocks=1 00:10:12.876 00:10:12.876 ' 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.876 --rc genhtml_branch_coverage=1 00:10:12.876 --rc genhtml_function_coverage=1 00:10:12.876 --rc genhtml_legend=1 00:10:12.876 --rc geninfo_all_blocks=1 00:10:12.876 --rc geninfo_unexecuted_blocks=1 00:10:12.876 00:10:12.876 ' 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.876 --rc genhtml_branch_coverage=1 00:10:12.876 --rc genhtml_function_coverage=1 00:10:12.876 --rc genhtml_legend=1 00:10:12.876 --rc geninfo_all_blocks=1 00:10:12.876 --rc geninfo_unexecuted_blocks=1 00:10:12.876 00:10:12.876 ' 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:12.876 11:22:20 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:12.876 11:22:20 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.876 11:22:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.876 ************************************ 
00:10:12.876 START TEST raid1_resize_data_offset_test 00:10:12.876 ************************************ 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59897 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59897' 00:10:12.876 Process raid pid: 59897 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59897 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59897 ']' 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.876 11:22:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.876 [2024-11-20 11:22:20.701767] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:10:12.876 [2024-11-20 11:22:20.701967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.135 [2024-11-20 11:22:20.890380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.394 [2024-11-20 11:22:21.052790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.653 [2024-11-20 11:22:21.269594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.653 [2024-11-20 11:22:21.269660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.911 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.911 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:10:13.912 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:13.912 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.912 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.173 malloc0 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.173 malloc1 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.173 11:22:21 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.173 null0 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.173 [2024-11-20 11:22:21.925931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:14.173 [2024-11-20 11:22:21.929082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:14.173 [2024-11-20 11:22:21.929177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:14.173 [2024-11-20 11:22:21.929460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:14.173 [2024-11-20 11:22:21.929500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:14.173 [2024-11-20 11:22:21.929987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:14.173 [2024-11-20 11:22:21.930272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:14.173 [2024-11-20 11:22:21.930310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:14.173 [2024-11-20 11:22:21.930595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.173 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.174 [2024-11-20 11:22:21.990715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:10:14.174 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.174 11:22:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:10:14.174 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.174 11:22:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.748 malloc2
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:14.748 [2024-11-20 11:22:22.571357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:14.748 [2024-11-20 11:22:22.588544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.748 [2024-11-20 11:22:22.591128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:10:14.748 11:22:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59897
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59897 ']'
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59897
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59897
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:15.007 killing process with pid 59897
11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59897'
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59897
00:10:15.007 11:22:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59897
00:10:15.007 [2024-11-20 11:22:22.675802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:15.007 [2024-11-20 11:22:22.676595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:10:15.007 [2024-11-20 11:22:22.676684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:15.007 [2024-11-20 11:22:22.676718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:10:15.007 [2024-11-20 11:22:22.709489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:15.007 [2024-11-20 11:22:22.709964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:15.007 [2024-11-20 11:22:22.710002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:10:16.912 [2024-11-20 11:22:24.387527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:17.850 11:22:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:10:17.850 
00:10:17.850 real 0m4.866s
00:10:17.850 user 0m4.815s
00:10:17.850 sys 0m0.665s
00:10:17.850 11:22:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:17.850 11:22:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.850 ************************************
00:10:17.850 END TEST raid1_resize_data_offset_test
00:10:17.850 ************************************
00:10:17.850 11:22:25 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:10:17.850 11:22:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:17.850 11:22:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:17.850 11:22:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:17.850 ************************************
00:10:17.850 START TEST raid0_resize_superblock_test
00:10:17.850 ************************************
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59982
00:10:17.850 Process raid pid: 59982
11:22:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59982'
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59982
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59982 ']'
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:17.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:17.850 11:22:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.109 [2024-11-20 11:22:25.627460] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:18.109 [2024-11-20 11:22:25.627675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:18.109 [2024-11-20 11:22:25.825908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:18.365 [2024-11-20 11:22:25.985711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:18.365 [2024-11-20 11:22:26.206592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:18.365 [2024-11-20 11:22:26.206675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:18.935 11:22:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.935 11:22:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:10:18.935 11:22:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:10:18.935 11:22:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.935 11:22:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.502 malloc0
00:10:19.502 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.502 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:19.502 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.502 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.502 [2024-11-20 11:22:27.232281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:19.502 [2024-11-20 11:22:27.232511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.502 [2024-11-20 11:22:27.232589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:19.502 [2024-11-20 11:22:27.232873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.503 [2024-11-20 11:22:27.235843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.503 [2024-11-20 11:22:27.236012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:10:19.503 pt0
00:10:19.503 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.503 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:10:19.503 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.503 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.761 77a32efe-289e-4db9-a6e8-c145d6fe56ec
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.761 14c213ce-9920-4d7b-839d-6b2d84e08202
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.761 4a6e76cb-e810-46c6-b58a-e1d345d8e5c8
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.761 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.761 [2024-11-20 11:22:27.382789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 14c213ce-9920-4d7b-839d-6b2d84e08202 is claimed
00:10:19.761 [2024-11-20 11:22:27.382911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a6e76cb-e810-46c6-b58a-e1d345d8e5c8 is claimed
00:10:19.761 [2024-11-20 11:22:27.383119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:19.761 [2024-11-20 11:22:27.383148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:10:19.762 [2024-11-20 11:22:27.383490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:19.762 [2024-11-20 11:22:27.383766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:19.762 [2024-11-20 11:22:27.383783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:10:19.762 [2024-11-20 11:22:27.383970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 [2024-11-20 11:22:27.507253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 [2024-11-20 11:22:27.555159] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:19.762 [2024-11-20 11:22:27.555323] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '14c213ce-9920-4d7b-839d-6b2d84e08202' was resized: old size 131072, new size 204800
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 [2024-11-20 11:22:27.562953] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:19.762 [2024-11-20 11:22:27.563091] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4a6e76cb-e810-46c6-b58a-e1d345d8e5c8' was resized: old size 131072, new size 204800
00:10:19.762 [2024-11-20 11:22:27.563277] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.762 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:10:20.021 [2024-11-20 11:22:27.679214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.021 [2024-11-20 11:22:27.730951] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:10:20.021 [2024-11-20 11:22:27.731209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:10:20.021 [2024-11-20 11:22:27.731279] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:20.021 [2024-11-20 11:22:27.731497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:10:20.021 [2024-11-20 11:22:27.731668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:20.021 [2024-11-20 11:22:27.731724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:20.021 [2024-11-20 11:22:27.731744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.021 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.021 [2024-11-20 11:22:27.738824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:20.021 [2024-11-20 11:22:27.739003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:20.021 [2024-11-20 11:22:27.739073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:10:20.021 [2024-11-20 11:22:27.739206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:20.021 [2024-11-20 11:22:27.742347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:20.021 [2024-11-20 11:22:27.742509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:10:20.022 pt0
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.022 [2024-11-20 11:22:27.745010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 14c213ce-9920-4d7b-839d-6b2d84e08202
00:10:20.022 [2024-11-20 11:22:27.745156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 14c213ce-9920-4d7b-839d-6b2d84e08202 is claimed
00:10:20.022 [2024-11-20 11:22:27.745316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4a6e76cb-e810-46c6-b58a-e1d345d8e5c8
00:10:20.022 [2024-11-20 11:22:27.745351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a6e76cb-e810-46c6-b58a-e1d345d8e5c8 is claimed
00:10:20.022 [2024-11-20 11:22:27.745507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4a6e76cb-e810-46c6-b58a-e1d345d8e5c8 (2) smaller than existing raid bdev Raid (3)
00:10:20.022 [2024-11-20 11:22:27.745542] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 14c213ce-9920-4d7b-839d-6b2d84e08202: File exists
00:10:20.022 [2024-11-20 11:22:27.745600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:10:20.022 [2024-11-20 11:22:27.745649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:10:20.022 [2024-11-20 11:22:27.745994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:20.022 [2024-11-20 11:22:27.746191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:10:20.022 [2024-11-20 11:22:27.746212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:10:20.022 [2024-11-20 11:22:27.746401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.022 [2024-11-20 11:22:27.759247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59982
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59982 ']'
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59982
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59982
killing process with pid 59982
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59982'
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59982
00:10:20.022 [2024-11-20 11:22:27.840534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:20.022 11:22:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59982
00:10:20.022 [2024-11-20 11:22:27.840711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:20.022 [2024-11-20 11:22:27.840777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:20.022 [2024-11-20 11:22:27.840801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:10:21.400 [2024-11-20 11:22:29.183011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:22.777 11:22:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:10:22.777 
00:10:22.777 real 0m4.739s
00:10:22.777 user 0m5.095s
00:10:22.777 sys 0m0.661s
00:10:22.777 11:22:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.777 ************************************
00:10:22.777 END TEST raid0_resize_superblock_test
00:10:22.777 ************************************
00:10:22.777 11:22:30 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.777 11:22:30 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:10:22.777 11:22:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:22.777 11:22:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.777 11:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:22.777 ************************************
00:10:22.777 START TEST raid1_resize_superblock_test
00:10:22.777 ************************************
Process raid pid: 60080
11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60080
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60080'
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60080
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60080 ']'
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:22.777 11:22:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.777 [2024-11-20 11:22:30.428663] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:22.777 [2024-11-20 11:22:30.429032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:22.777 [2024-11-20 11:22:30.607918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:23.036 [2024-11-20 11:22:30.756605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.295 [2024-11-20 11:22:30.966039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:23.295 [2024-11-20 11:22:30.966096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:23.888 11:22:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:23.888 11:22:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:10:23.888 11:22:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:10:23.888 11:22:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.888 11:22:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.454 malloc0
00:10:24.454 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.454 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:10:24.454 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.454 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.454 [2024-11-20 11:22:32.043389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:10:24.454 [2024-11-20 11:22:32.043652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:24.454 [2024-11-20 11:22:32.043741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:24.454 [2024-11-20 11:22:32.043995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:24.455 [2024-11-20 11:22:32.046893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
pt0
00:10:24.455 [2024-11-20 11:22:32.047068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 c944240a-b7f0-4ef3-b38e-bfe4a92598e8
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 4f21c081-fb88-4ad1-a925-fc0e11708882
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 e1913523-9d43-40c6-ab64-5971a3fe1488
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 [2024-11-20 11:22:32.187421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f21c081-fb88-4ad1-a925-fc0e11708882 is claimed
00:10:24.455 [2024-11-20 11:22:32.187755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1913523-9d43-40c6-ab64-5971a3fe1488 is claimed
00:10:24.455 [2024-11-20 11:22:32.188154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:24.455 [2024-11-20 11:22:32.188184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:10:24.455 [2024-11-20 11:22:32.188546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:24.455 [2024-11-20 11:22:32.188862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:24.455 [2024-11-20 11:22:32.188879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:10:24.455 [2024-11-20 11:22:32.189144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:10:24.455 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 [2024-11-20 11:22:32.303743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 [2024-11-20 11:22:32.355793] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:24.714 [2024-11-20 11:22:32.355954] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4f21c081-fb88-4ad1-a925-fc0e11708882' was resized: old size 131072, new size 204800 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 [2024-11-20 11:22:32.363583] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:24.714 [2024-11-20 11:22:32.363756] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e1913523-9d43-40c6-ab64-5971a3fe1488' was resized: old size 131072, new size 204800 00:10:24.714 [2024-11-20 11:22:32.363954] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.714 [2024-11-20 11:22:32.483794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.714 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 [2024-11-20 11:22:32.559539] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:10:24.974 [2024-11-20 11:22:32.559820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:24.974 [2024-11-20 11:22:32.559979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:24.974 [2024-11-20 11:22:32.560237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.974 [2024-11-20 11:22:32.560699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.974 [2024-11-20 11:22:32.560945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.974 [2024-11-20 11:22:32.560982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 [2024-11-20 11:22:32.567432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:24.974 [2024-11-20 11:22:32.567651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.974 [2024-11-20 11:22:32.567726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:24.974 [2024-11-20 11:22:32.567957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.974 [2024-11-20 11:22:32.571004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.974 [2024-11-20 11:22:32.571056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:10:24.974 pt0 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 [2024-11-20 11:22:32.573411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4f21c081-fb88-4ad1-a925-fc0e11708882 00:10:24.974 [2024-11-20 11:22:32.573495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f21c081-fb88-4ad1-a925-fc0e11708882 is claimed 00:10:24.974 [2024-11-20 11:22:32.573675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e1913523-9d43-40c6-ab64-5971a3fe1488 00:10:24.974 [2024-11-20 11:22:32.573714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e1913523-9d43-40c6-ab64-5971a3fe1488 is claimed 00:10:24.974 [2024-11-20 11:22:32.573885] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e1913523-9d43-40c6-ab64-5971a3fe1488 (2) smaller than existing raid bdev Raid (3) 00:10:24.974 [2024-11-20 11:22:32.573916] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4f21c081-fb88-4ad1-a925-fc0e11708882: File exists 00:10:24.974 [2024-11-20 11:22:32.573973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:24.974 [2024-11-20 11:22:32.574049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:24.974 [2024-11-20 11:22:32.574391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:24.974 [2024-11-20 11:22:32.574768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:24.974 [2024-11-20 
11:22:32.574896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:24.974 [2024-11-20 11:22:32.575294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:10:24.974 [2024-11-20 11:22:32.587752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60080 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60080 ']' 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60080 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60080 00:10:24.974 killing process with pid 60080 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60080' 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60080 00:10:24.974 [2024-11-20 11:22:32.665688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.974 [2024-11-20 11:22:32.665822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.974 11:22:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60080 00:10:24.974 [2024-11-20 11:22:32.665897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.974 [2024-11-20 11:22:32.665911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:26.353 [2024-11-20 11:22:34.018213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.288 ************************************ 00:10:27.288 END TEST raid1_resize_superblock_test 00:10:27.288 ************************************ 00:10:27.288 11:22:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:27.288 00:10:27.288 real 0m4.778s 00:10:27.288 user 0m5.172s 00:10:27.289 sys 0m0.650s 00:10:27.289 11:22:35 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.289 11:22:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:10:27.548 11:22:35 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:10:27.548 11:22:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.548 11:22:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.548 11:22:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.548 ************************************ 00:10:27.548 START TEST raid_function_test_raid0 00:10:27.548 ************************************ 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:10:27.548 Process raid pid: 60183 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60183 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60183' 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 60183 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60183 ']' 00:10:27.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.548 11:22:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:27.548 [2024-11-20 11:22:35.263564] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:27.548 [2024-11-20 11:22:35.264090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.807 [2024-11-20 11:22:35.456786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.807 [2024-11-20 11:22:35.618868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.066 [2024-11-20 11:22:35.859954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.066 [2024-11-20 11:22:35.860025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:28.634 11:22:36 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 Base_1 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 Base_2 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 [2024-11-20 11:22:36.438864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:28.634 [2024-11-20 11:22:36.441727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:28.634 [2024-11-20 11:22:36.442038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.634 [2024-11-20 11:22:36.442069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:28.634 [2024-11-20 11:22:36.442409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:28.634 [2024-11-20 11:22:36.442599] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007780 00:10:28.634 [2024-11-20 11:22:36.442638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:28.634 [2024-11-20 11:22:36.442890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.634 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:28.893 11:22:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:28.893 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:29.152 [2024-11-20 11:22:36.795126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:29.152 /dev/nbd0 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.152 1+0 records in 00:10:29.152 1+0 records out 00:10:29.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381147 s, 10.7 MB/s 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:29.152 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:29.153 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:29.153 11:22:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.412 { 00:10:29.412 "nbd_device": "/dev/nbd0", 00:10:29.412 "bdev_name": "raid" 00:10:29.412 } 00:10:29.412 ]' 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.412 { 00:10:29.412 "nbd_device": "/dev/nbd0", 00:10:29.412 "bdev_name": "raid" 00:10:29.412 } 00:10:29.412 ]' 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@25 -- # local unmap_off
00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:10:29.412 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:10:29.671 4096+0 records in
00:10:29.671 4096+0 records out
00:10:29.671 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0328658 s, 63.8 MB/s
00:10:29.671 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:10:29.930 4096+0 records in
00:10:29.930 4096+0 records out
00:10:29.930 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.329717 s, 6.4 MB/s
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:10:29.930 128+0 records in
00:10:29.930 128+0 records out
00:10:29.930 65536 bytes (66 kB, 64 KiB) copied, 0.00136281 s, 48.1 MB/s
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:10:29.930 2035+0 records in
00:10:29.930 2035+0 records out
00:10:29.930 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00994622 s, 105 MB/s
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:10:29.930 456+0 records in
00:10:29.930 456+0 records out
00:10:29.930 233472 bytes (233 kB, 228 KiB) copied, 0.00266974 s, 87.5 MB/s
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:29.930 11:22:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:30.498 [2024-11-20 11:22:38.035934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:10:30.498 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60183
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60183 ']'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60183
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60183
00:10:30.758 killing process with pid 60183 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60183'
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60183
00:10:30.758 [2024-11-20 11:22:38.441963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:30.758 11:22:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60183
00:10:30.758 [2024-11-20 11:22:38.442097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:30.758 [2024-11-20 11:22:38.442161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:30.758 [2024-11-20 11:22:38.442184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:10:31.017 [2024-11-20 11:22:38.626617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:31.953 11:22:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:10:31.953
00:10:31.953 real 0m4.495s
00:10:31.953 user 0m5.629s
00:10:31.953 sys 0m1.070s
00:10:31.953 11:22:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:31.953 11:22:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:10:31.953 ************************************
00:10:31.953 END TEST raid_function_test_raid0
00:10:31.953 ************************************
00:10:31.953 11:22:39 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:10:31.953 11:22:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:31.953 11:22:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:31.953 11:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:31.953 ************************************
00:10:31.953 START TEST raid_function_test_concat
00:10:31.953 ************************************
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:10:31.953 Process raid pid: 60317 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60317
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60317'
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60317
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60317 ']'
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:31.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:31.953 11:22:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:32.212 11:22:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:32.212 [2024-11-20 11:22:39.811761] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:32.212 [2024-11-20 11:22:39.811924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:32.212 [2024-11-20 11:22:39.999123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:32.473 [2024-11-20 11:22:40.155216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:32.732 [2024-11-20 11:22:40.358522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:32.732 [2024-11-20 11:22:40.358870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:32.991 Base_1
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:32.991 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:33.250 Base_2
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:33.250 [2024-11-20 11:22:40.864967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:10:33.250 [2024-11-20 11:22:40.867575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:10:33.250 [2024-11-20 11:22:40.867863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:33.250 [2024-11-20 11:22:40.867893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:10:33.250 [2024-11-20 11:22:40.868235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:33.250 [2024-11-20 11:22:40.868444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:33.250 [2024-11-20 11:22:40.868460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:10:33.250 [2024-11-20 11:22:40.868648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:10:33.250 11:22:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:10:33.509 [2024-11-20 11:22:41.209099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:33.509 /dev/nbd0
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:33.509 1+0 records in
00:10:33.509 1+0 records out
00:10:33.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025835 s, 15.9 MB/s
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:10:33.509 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:33.768 {
00:10:33.768 "nbd_device": "/dev/nbd0",
00:10:33.768 "bdev_name": "raid"
00:10:33.768 }
00:10:33.768 ]'
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:10:33.768 {
00:10:33.768 "nbd_device": "/dev/nbd0",
00:10:33.768 "bdev_name": "raid"
00:10:33.768 }
00:10:33.768 ]'
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:10:33.768 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:10:34.027 4096+0 records in
00:10:34.027 4096+0 records out
00:10:34.027 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0336816 s, 62.3 MB/s
00:10:34.027 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:10:34.286 4096+0 records in
00:10:34.286 4096+0 records out
00:10:34.286 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.321733 s, 6.5 MB/s
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:10:34.286 128+0 records in
00:10:34.286 128+0 records out
00:10:34.286 65536 bytes (66 kB, 64 KiB) copied, 0.00100571 s, 65.2 MB/s
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:10:34.286 2035+0 records in
00:10:34.286 2035+0 records out
00:10:34.286 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00987552 s, 106 MB/s
00:10:34.286 11:22:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:10:34.286 456+0 records in
00:10:34.286 456+0 records out
00:10:34.286 233472 bytes (233 kB, 228 KiB) copied, 0.00263005 s, 88.8 MB/s
00:10:34.286 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:34.287 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:34.545 [2024-11-20 11:22:42.350221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:10:34.545 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60317
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60317 ']'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60317
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60317
00:10:35.113 killing process with pid 60317 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60317'
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60317
00:10:35.113 [2024-11-20 11:22:42.745875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:35.113 11:22:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60317
00:10:35.113 [2024-11-20 11:22:42.745994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:35.113 [2024-11-20 11:22:42.746076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:35.113 [2024-11-20 11:22:42.746096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:10:35.372 [2024-11-20 11:22:42.969681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:36.309 ************************************
00:10:36.309 END TEST raid_function_test_concat
00:10:36.309 ************************************
00:10:36.309 11:22:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:10:36.309
00:10:36.309 real 0m4.324s
00:10:36.309 user 0m5.271s
00:10:36.309 sys 0m1.019s
00:10:36.309 11:22:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:36.309 11:22:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:10:36.309 11:22:44 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:10:36.309 11:22:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:36.309 11:22:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:36.309 11:22:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:36.309 ************************************
00:10:36.309 START TEST raid0_resize_test
00:10:36.309 ************************************
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:10:36.309 Process raid pid: 60451 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60451
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60451'
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60451
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60451 ']'
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:36.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:36.309 11:22:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.568 [2024-11-20 11:22:44.195439] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:10:36.568 [2024-11-20 11:22:44.195635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.568 [2024-11-20 11:22:44.384122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.828 [2024-11-20 11:22:44.520130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.087 [2024-11-20 11:22:44.726386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.087 [2024-11-20 11:22:44.726433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.346 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.346 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.346 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:37.346 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.346 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.604 Base_1 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.604 Base_2 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.604 [2024-11-20 11:22:45.208068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:37.604 [2024-11-20 11:22:45.210869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:37.604 [2024-11-20 11:22:45.211150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:37.604 [2024-11-20 11:22:45.211193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:37.604 [2024-11-20 11:22:45.211559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:37.604 [2024-11-20 11:22:45.211830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:37.604 [2024-11-20 11:22:45.211846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:37.604 [2024-11-20 11:22:45.212104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.604 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 [2024-11-20 11:22:45.216142] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:37.605 [2024-11-20 11:22:45.216174] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:37.605 true 
00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 [2024-11-20 11:22:45.228388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 [2024-11-20 11:22:45.284211] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:37.605 [2024-11-20 11:22:45.284373] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:37.605 [2024-11-20 11:22:45.284539] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:37.605 true 
00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 [2024-11-20 11:22:45.296402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60451 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60451 ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60451 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60451 00:10:37.605 killing process with pid 60451 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60451' 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60451 00:10:37.605 [2024-11-20 11:22:45.367067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.605 11:22:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60451 00:10:37.605 [2024-11-20 11:22:45.367183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.605 [2024-11-20 11:22:45.367246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.605 [2024-11-20 11:22:45.367259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:37.605 [2024-11-20 11:22:45.382691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.577 11:22:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:38.577 00:10:38.577 real 0m2.311s 00:10:38.577 user 0m2.553s 00:10:38.577 sys 0m0.393s 00:10:38.577 11:22:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.577 11:22:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.577 ************************************ 00:10:38.577 END TEST raid0_resize_test 00:10:38.577 ************************************ 00:10:38.836 11:22:46 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:38.836 11:22:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.836 11:22:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.836 11:22:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.836 ************************************ 
00:10:38.836 START TEST raid1_resize_test 00:10:38.836 ************************************ 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:38.836 Process raid pid: 60507 00:10:38.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60507 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60507' 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60507 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60507 ']' 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.836 11:22:46 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.837 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.837 11:22:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.837 [2024-11-20 11:22:46.566213] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:38.837 [2024-11-20 11:22:46.566882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.096 [2024-11-20 11:22:46.756175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.096 [2024-11-20 11:22:46.886240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.355 [2024-11-20 11:22:47.090391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.355 [2024-11-20 11:22:47.090690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 Base_1 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 Base_2 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 [2024-11-20 11:22:47.593399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:39.924 [2024-11-20 11:22:47.596064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:39.924 [2024-11-20 11:22:47.596264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:39.924 [2024-11-20 11:22:47.596444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:39.924 [2024-11-20 11:22:47.596809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:39.924 [2024-11-20 11:22:47.596974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:39.924 [2024-11-20 11:22:47.596991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:39.924 [2024-11-20 11:22:47.597176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 [2024-11-20 11:22:47.601370] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:39.924 [2024-11-20 11:22:47.601406] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:39.924 true 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 [2024-11-20 11:22:47.613559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 [2024-11-20 
11:22:47.665371] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:39.924 [2024-11-20 11:22:47.665575] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:39.924 [2024-11-20 11:22:47.665773] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:39.924 true 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.924 [2024-11-20 11:22:47.677588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60507 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60507 ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60507 00:10:39.924 11:22:47 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.924 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60507 00:10:39.924 killing process with pid 60507 00:10:39.925 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.925 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.925 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60507' 00:10:39.925 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60507 00:10:39.925 [2024-11-20 11:22:47.756370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.925 11:22:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60507 00:10:39.925 [2024-11-20 11:22:47.756467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.925 [2024-11-20 11:22:47.757072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.925 [2024-11-20 11:22:47.757105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:40.183 [2024-11-20 11:22:47.771914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.120 11:22:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:41.120 ************************************ 00:10:41.120 END TEST raid1_resize_test 00:10:41.120 ************************************ 00:10:41.120 00:10:41.120 real 0m2.367s 00:10:41.120 user 0m2.629s 00:10:41.120 sys 0m0.396s 00:10:41.120 11:22:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.120 11:22:48 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.120 11:22:48 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:41.121 11:22:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:41.121 11:22:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:41.121 11:22:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:41.121 11:22:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.121 11:22:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.121 ************************************ 00:10:41.121 START TEST raid_state_function_test 00:10:41.121 ************************************ 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60564 00:10:41.121 Process raid pid: 60564 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60564' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60564 00:10:41.121 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60564 ']' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.121 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.380 [2024-11-20 11:22:49.020427] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:41.380 [2024-11-20 11:22:49.020935] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.380 [2024-11-20 11:22:49.224328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.639 [2024-11-20 11:22:49.383450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.898 [2024-11-20 11:22:49.631520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.898 [2024-11-20 11:22:49.631577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.157 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.157 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:42.157 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:10:42.157 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.157 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.157 [2024-11-20 11:22:49.994269] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.157 [2024-11-20 11:22:49.994338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.157 [2024-11-20 11:22:49.994361] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.157 [2024-11-20 11:22:49.994390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.415 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.415 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.415 "name": "Existed_Raid", 00:10:42.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.415 "strip_size_kb": 64, 00:10:42.415 "state": "configuring", 00:10:42.415 "raid_level": "raid0", 00:10:42.415 "superblock": false, 00:10:42.415 "num_base_bdevs": 2, 00:10:42.416 "num_base_bdevs_discovered": 0, 00:10:42.416 "num_base_bdevs_operational": 2, 00:10:42.416 "base_bdevs_list": [ 00:10:42.416 { 00:10:42.416 "name": "BaseBdev1", 00:10:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.416 "is_configured": false, 00:10:42.416 "data_offset": 0, 00:10:42.416 "data_size": 0 00:10:42.416 }, 00:10:42.416 { 00:10:42.416 "name": "BaseBdev2", 00:10:42.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.416 "is_configured": false, 00:10:42.416 "data_offset": 0, 00:10:42.416 "data_size": 0 00:10:42.416 } 00:10:42.416 ] 00:10:42.416 }' 00:10:42.416 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.416 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-20 11:22:50.526341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.983 [2024-11-20 11:22:50.526399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-20 11:22:50.538424] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.983 [2024-11-20 11:22:50.538639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.983 [2024-11-20 11:22:50.538780] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:42.983 [2024-11-20 11:22:50.538846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.983 [2024-11-20 11:22:50.586890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:10:42.983 BaseBdev1 00:10:42.983 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.984 [ 00:10:42.984 { 00:10:42.984 "name": "BaseBdev1", 00:10:42.984 "aliases": [ 00:10:42.984 "17f04a3a-8491-410b-86be-a6f6f1451fd0" 00:10:42.984 ], 00:10:42.984 "product_name": "Malloc disk", 00:10:42.984 "block_size": 512, 00:10:42.984 "num_blocks": 65536, 00:10:42.984 "uuid": "17f04a3a-8491-410b-86be-a6f6f1451fd0", 00:10:42.984 "assigned_rate_limits": { 00:10:42.984 
"rw_ios_per_sec": 0, 00:10:42.984 "rw_mbytes_per_sec": 0, 00:10:42.984 "r_mbytes_per_sec": 0, 00:10:42.984 "w_mbytes_per_sec": 0 00:10:42.984 }, 00:10:42.984 "claimed": true, 00:10:42.984 "claim_type": "exclusive_write", 00:10:42.984 "zoned": false, 00:10:42.984 "supported_io_types": { 00:10:42.984 "read": true, 00:10:42.984 "write": true, 00:10:42.984 "unmap": true, 00:10:42.984 "flush": true, 00:10:42.984 "reset": true, 00:10:42.984 "nvme_admin": false, 00:10:42.984 "nvme_io": false, 00:10:42.984 "nvme_io_md": false, 00:10:42.984 "write_zeroes": true, 00:10:42.984 "zcopy": true, 00:10:42.984 "get_zone_info": false, 00:10:42.984 "zone_management": false, 00:10:42.984 "zone_append": false, 00:10:42.984 "compare": false, 00:10:42.984 "compare_and_write": false, 00:10:42.984 "abort": true, 00:10:42.984 "seek_hole": false, 00:10:42.984 "seek_data": false, 00:10:42.984 "copy": true, 00:10:42.984 "nvme_iov_md": false 00:10:42.984 }, 00:10:42.984 "memory_domains": [ 00:10:42.984 { 00:10:42.984 "dma_device_id": "system", 00:10:42.984 "dma_device_type": 1 00:10:42.984 }, 00:10:42.984 { 00:10:42.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.984 "dma_device_type": 2 00:10:42.984 } 00:10:42.984 ], 00:10:42.984 "driver_specific": {} 00:10:42.984 } 00:10:42.984 ] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.984 "name": "Existed_Raid", 00:10:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.984 "strip_size_kb": 64, 00:10:42.984 "state": "configuring", 00:10:42.984 "raid_level": "raid0", 00:10:42.984 "superblock": false, 00:10:42.984 "num_base_bdevs": 2, 00:10:42.984 "num_base_bdevs_discovered": 1, 00:10:42.984 "num_base_bdevs_operational": 2, 00:10:42.984 "base_bdevs_list": [ 00:10:42.984 { 00:10:42.984 "name": "BaseBdev1", 00:10:42.984 "uuid": "17f04a3a-8491-410b-86be-a6f6f1451fd0", 00:10:42.984 "is_configured": true, 00:10:42.984 "data_offset": 0, 00:10:42.984 "data_size": 65536 00:10:42.984 }, 00:10:42.984 { 00:10:42.984 "name": 
"BaseBdev2", 00:10:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.984 "is_configured": false, 00:10:42.984 "data_offset": 0, 00:10:42.984 "data_size": 0 00:10:42.984 } 00:10:42.984 ] 00:10:42.984 }' 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.984 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 [2024-11-20 11:22:51.143170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.552 [2024-11-20 11:22:51.143406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 [2024-11-20 11:22:51.151231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.552 [2024-11-20 11:22:51.153604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.552 [2024-11-20 11:22:51.153702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.552 "name": "Existed_Raid", 00:10:43.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.552 "strip_size_kb": 64, 00:10:43.552 "state": "configuring", 00:10:43.552 "raid_level": "raid0", 00:10:43.552 "superblock": false, 00:10:43.552 "num_base_bdevs": 2, 00:10:43.552 "num_base_bdevs_discovered": 1, 00:10:43.552 "num_base_bdevs_operational": 2, 00:10:43.552 "base_bdevs_list": [ 00:10:43.552 { 00:10:43.552 "name": "BaseBdev1", 00:10:43.552 "uuid": "17f04a3a-8491-410b-86be-a6f6f1451fd0", 00:10:43.552 "is_configured": true, 00:10:43.552 "data_offset": 0, 00:10:43.552 "data_size": 65536 00:10:43.552 }, 00:10:43.552 { 00:10:43.552 "name": "BaseBdev2", 00:10:43.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.552 "is_configured": false, 00:10:43.552 "data_offset": 0, 00:10:43.552 "data_size": 0 00:10:43.552 } 00:10:43.552 ] 00:10:43.552 }' 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.552 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.120 [2024-11-20 11:22:51.729032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.120 [2024-11-20 11:22:51.729402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.120 [2024-11-20 11:22:51.729457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:44.120 [2024-11-20 11:22:51.729948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:10:44.120 [2024-11-20 11:22:51.730314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.120 [2024-11-20 11:22:51.730345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.120 [2024-11-20 11:22:51.730676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.120 BaseBdev2 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.120 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.121 [ 00:10:44.121 { 00:10:44.121 "name": "BaseBdev2", 00:10:44.121 "aliases": [ 00:10:44.121 "81ace9ea-d8e7-43e0-abc1-0fd6abf2f22d" 00:10:44.121 ], 00:10:44.121 "product_name": "Malloc disk", 00:10:44.121 "block_size": 512, 00:10:44.121 "num_blocks": 65536, 00:10:44.121 "uuid": "81ace9ea-d8e7-43e0-abc1-0fd6abf2f22d", 00:10:44.121 "assigned_rate_limits": { 00:10:44.121 "rw_ios_per_sec": 0, 00:10:44.121 "rw_mbytes_per_sec": 0, 00:10:44.121 "r_mbytes_per_sec": 0, 00:10:44.121 "w_mbytes_per_sec": 0 00:10:44.121 }, 00:10:44.121 "claimed": true, 00:10:44.121 "claim_type": "exclusive_write", 00:10:44.121 "zoned": false, 00:10:44.121 "supported_io_types": { 00:10:44.121 "read": true, 00:10:44.121 "write": true, 00:10:44.121 "unmap": true, 00:10:44.121 "flush": true, 00:10:44.121 "reset": true, 00:10:44.121 "nvme_admin": false, 00:10:44.121 "nvme_io": false, 00:10:44.121 "nvme_io_md": false, 00:10:44.121 "write_zeroes": true, 00:10:44.121 "zcopy": true, 00:10:44.121 "get_zone_info": false, 00:10:44.121 "zone_management": false, 00:10:44.121 "zone_append": false, 00:10:44.121 "compare": false, 00:10:44.121 "compare_and_write": false, 00:10:44.121 "abort": true, 00:10:44.121 "seek_hole": false, 00:10:44.121 "seek_data": false, 00:10:44.121 "copy": true, 00:10:44.121 "nvme_iov_md": false 00:10:44.121 }, 00:10:44.121 "memory_domains": [ 00:10:44.121 { 00:10:44.121 "dma_device_id": "system", 00:10:44.121 "dma_device_type": 1 00:10:44.121 }, 00:10:44.121 { 00:10:44.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.121 "dma_device_type": 2 00:10:44.121 } 00:10:44.121 ], 00:10:44.121 "driver_specific": {} 00:10:44.121 } 00:10:44.121 ] 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.121 "name": "Existed_Raid", 00:10:44.121 
"uuid": "b51530de-1249-4520-9eb3-832398584181", 00:10:44.121 "strip_size_kb": 64, 00:10:44.121 "state": "online", 00:10:44.121 "raid_level": "raid0", 00:10:44.121 "superblock": false, 00:10:44.121 "num_base_bdevs": 2, 00:10:44.121 "num_base_bdevs_discovered": 2, 00:10:44.121 "num_base_bdevs_operational": 2, 00:10:44.121 "base_bdevs_list": [ 00:10:44.121 { 00:10:44.121 "name": "BaseBdev1", 00:10:44.121 "uuid": "17f04a3a-8491-410b-86be-a6f6f1451fd0", 00:10:44.121 "is_configured": true, 00:10:44.121 "data_offset": 0, 00:10:44.121 "data_size": 65536 00:10:44.121 }, 00:10:44.121 { 00:10:44.121 "name": "BaseBdev2", 00:10:44.121 "uuid": "81ace9ea-d8e7-43e0-abc1-0fd6abf2f22d", 00:10:44.121 "is_configured": true, 00:10:44.121 "data_offset": 0, 00:10:44.121 "data_size": 65536 00:10:44.121 } 00:10:44.121 ] 00:10:44.121 }' 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.121 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.688 11:22:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.688 [2024-11-20 11:22:52.289721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.688 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.688 "name": "Existed_Raid", 00:10:44.688 "aliases": [ 00:10:44.688 "b51530de-1249-4520-9eb3-832398584181" 00:10:44.688 ], 00:10:44.688 "product_name": "Raid Volume", 00:10:44.688 "block_size": 512, 00:10:44.688 "num_blocks": 131072, 00:10:44.688 "uuid": "b51530de-1249-4520-9eb3-832398584181", 00:10:44.688 "assigned_rate_limits": { 00:10:44.688 "rw_ios_per_sec": 0, 00:10:44.688 "rw_mbytes_per_sec": 0, 00:10:44.688 "r_mbytes_per_sec": 0, 00:10:44.688 "w_mbytes_per_sec": 0 00:10:44.688 }, 00:10:44.688 "claimed": false, 00:10:44.688 "zoned": false, 00:10:44.689 "supported_io_types": { 00:10:44.689 "read": true, 00:10:44.689 "write": true, 00:10:44.689 "unmap": true, 00:10:44.689 "flush": true, 00:10:44.689 "reset": true, 00:10:44.689 "nvme_admin": false, 00:10:44.689 "nvme_io": false, 00:10:44.689 "nvme_io_md": false, 00:10:44.689 "write_zeroes": true, 00:10:44.689 "zcopy": false, 00:10:44.689 "get_zone_info": false, 00:10:44.689 "zone_management": false, 00:10:44.689 "zone_append": false, 00:10:44.689 "compare": false, 00:10:44.689 "compare_and_write": false, 00:10:44.689 "abort": false, 00:10:44.689 "seek_hole": false, 00:10:44.689 "seek_data": false, 00:10:44.689 "copy": false, 00:10:44.689 "nvme_iov_md": false 00:10:44.689 }, 00:10:44.689 "memory_domains": [ 00:10:44.689 { 00:10:44.689 "dma_device_id": "system", 00:10:44.689 "dma_device_type": 1 00:10:44.689 }, 00:10:44.689 { 00:10:44.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.689 
"dma_device_type": 2 00:10:44.689 }, 00:10:44.689 { 00:10:44.689 "dma_device_id": "system", 00:10:44.689 "dma_device_type": 1 00:10:44.689 }, 00:10:44.689 { 00:10:44.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.689 "dma_device_type": 2 00:10:44.689 } 00:10:44.689 ], 00:10:44.689 "driver_specific": { 00:10:44.689 "raid": { 00:10:44.689 "uuid": "b51530de-1249-4520-9eb3-832398584181", 00:10:44.689 "strip_size_kb": 64, 00:10:44.689 "state": "online", 00:10:44.689 "raid_level": "raid0", 00:10:44.689 "superblock": false, 00:10:44.689 "num_base_bdevs": 2, 00:10:44.689 "num_base_bdevs_discovered": 2, 00:10:44.689 "num_base_bdevs_operational": 2, 00:10:44.689 "base_bdevs_list": [ 00:10:44.689 { 00:10:44.689 "name": "BaseBdev1", 00:10:44.689 "uuid": "17f04a3a-8491-410b-86be-a6f6f1451fd0", 00:10:44.689 "is_configured": true, 00:10:44.689 "data_offset": 0, 00:10:44.689 "data_size": 65536 00:10:44.689 }, 00:10:44.689 { 00:10:44.689 "name": "BaseBdev2", 00:10:44.689 "uuid": "81ace9ea-d8e7-43e0-abc1-0fd6abf2f22d", 00:10:44.689 "is_configured": true, 00:10:44.689 "data_offset": 0, 00:10:44.689 "data_size": 65536 00:10:44.689 } 00:10:44.689 ] 00:10:44.689 } 00:10:44.689 } 00:10:44.689 }' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:44.689 BaseBdev2' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.689 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:44.948 [2024-11-20 11:22:52.561689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.948 [2024-11-20 11:22:52.561733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.948 [2024-11-20 11:22:52.561837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.948 "name": "Existed_Raid", 00:10:44.948 "uuid": "b51530de-1249-4520-9eb3-832398584181", 00:10:44.948 "strip_size_kb": 64, 00:10:44.948 "state": "offline", 00:10:44.948 "raid_level": "raid0", 00:10:44.948 "superblock": false, 00:10:44.948 "num_base_bdevs": 2, 00:10:44.948 "num_base_bdevs_discovered": 1, 00:10:44.948 "num_base_bdevs_operational": 1, 00:10:44.948 "base_bdevs_list": [ 00:10:44.948 { 00:10:44.948 "name": null, 00:10:44.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.948 "is_configured": false, 00:10:44.948 "data_offset": 0, 00:10:44.948 "data_size": 65536 00:10:44.948 }, 00:10:44.948 { 00:10:44.948 "name": "BaseBdev2", 00:10:44.948 "uuid": "81ace9ea-d8e7-43e0-abc1-0fd6abf2f22d", 00:10:44.948 "is_configured": true, 00:10:44.948 "data_offset": 0, 00:10:44.948 "data_size": 65536 00:10:44.948 } 00:10:44.948 ] 00:10:44.948 }' 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.948 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 
)) 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.514 [2024-11-20 11:22:53.237749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.514 [2024-11-20 11:22:53.237844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.514 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60564 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60564 ']' 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60564 00:10:45.772 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60564 00:10:45.773 killing process with pid 60564 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60564' 00:10:45.773 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60564 00:10:45.773 [2024-11-20 11:22:53.418865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.773 
11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60564 00:10:45.773 [2024-11-20 11:22:53.435363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.708 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.708 00:10:46.708 real 0m5.655s 00:10:46.708 user 0m8.475s 00:10:46.708 sys 0m0.824s 00:10:46.708 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.708 ************************************ 00:10:46.708 END TEST raid_state_function_test 00:10:46.708 ************************************ 00:10:46.708 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.064 11:22:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:47.065 11:22:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.065 11:22:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.065 11:22:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 ************************************ 00:10:47.065 START TEST raid_state_function_test_sb 00:10:47.065 ************************************ 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.065 11:22:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:10:47.065 Process raid pid: 60828 00:10:47.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60828 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60828' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60828 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60828 ']' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.065 11:22:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 [2024-11-20 11:22:54.680954] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:10:47.065 [2024-11-20 11:22:54.681370] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.065 [2024-11-20 11:22:54.855879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.324 [2024-11-20 11:22:54.996664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.582 [2024-11-20 11:22:55.216470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.582 [2024-11-20 11:22:55.216522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.149 [2024-11-20 11:22:55.751585] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.149 [2024-11-20 11:22:55.751669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.149 [2024-11-20 11:22:55.751688] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.149 [2024-11-20 11:22:55.751705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.149 
11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.149 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.150 "name": "Existed_Raid", 00:10:48.150 "uuid": "db39fb24-aa08-4f45-baa6-021e1e39e03f", 00:10:48.150 "strip_size_kb": 
64, 00:10:48.150 "state": "configuring", 00:10:48.150 "raid_level": "raid0", 00:10:48.150 "superblock": true, 00:10:48.150 "num_base_bdevs": 2, 00:10:48.150 "num_base_bdevs_discovered": 0, 00:10:48.150 "num_base_bdevs_operational": 2, 00:10:48.150 "base_bdevs_list": [ 00:10:48.150 { 00:10:48.150 "name": "BaseBdev1", 00:10:48.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.150 "is_configured": false, 00:10:48.150 "data_offset": 0, 00:10:48.150 "data_size": 0 00:10:48.150 }, 00:10:48.150 { 00:10:48.150 "name": "BaseBdev2", 00:10:48.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.150 "is_configured": false, 00:10:48.150 "data_offset": 0, 00:10:48.150 "data_size": 0 00:10:48.150 } 00:10:48.150 ] 00:10:48.150 }' 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.150 11:22:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 [2024-11-20 11:22:56.299664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.717 [2024-11-20 11:22:56.299870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.717 11:22:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 [2024-11-20 11:22:56.311739] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.717 [2024-11-20 11:22:56.311951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.717 [2024-11-20 11:22:56.312085] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.717 [2024-11-20 11:22:56.312220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 [2024-11-20 11:22:56.364202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.717 BaseBdev1 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.717 [ 00:10:48.717 { 00:10:48.717 "name": "BaseBdev1", 00:10:48.717 "aliases": [ 00:10:48.717 "b7545412-d259-46c4-bdc9-8303a12825ed" 00:10:48.717 ], 00:10:48.717 "product_name": "Malloc disk", 00:10:48.717 "block_size": 512, 00:10:48.717 "num_blocks": 65536, 00:10:48.717 "uuid": "b7545412-d259-46c4-bdc9-8303a12825ed", 00:10:48.717 "assigned_rate_limits": { 00:10:48.717 "rw_ios_per_sec": 0, 00:10:48.717 "rw_mbytes_per_sec": 0, 00:10:48.717 "r_mbytes_per_sec": 0, 00:10:48.717 "w_mbytes_per_sec": 0 00:10:48.717 }, 00:10:48.717 "claimed": true, 00:10:48.717 "claim_type": "exclusive_write", 00:10:48.717 "zoned": false, 00:10:48.717 "supported_io_types": { 00:10:48.717 "read": true, 00:10:48.717 "write": true, 00:10:48.717 "unmap": true, 00:10:48.717 "flush": true, 00:10:48.717 "reset": true, 00:10:48.717 "nvme_admin": false, 00:10:48.717 "nvme_io": false, 00:10:48.717 "nvme_io_md": false, 00:10:48.717 "write_zeroes": true, 00:10:48.717 "zcopy": true, 00:10:48.717 "get_zone_info": false, 00:10:48.717 "zone_management": false, 00:10:48.717 "zone_append": false, 00:10:48.717 "compare": false, 00:10:48.717 "compare_and_write": false, 00:10:48.717 
"abort": true, 00:10:48.717 "seek_hole": false, 00:10:48.717 "seek_data": false, 00:10:48.717 "copy": true, 00:10:48.717 "nvme_iov_md": false 00:10:48.717 }, 00:10:48.717 "memory_domains": [ 00:10:48.717 { 00:10:48.717 "dma_device_id": "system", 00:10:48.717 "dma_device_type": 1 00:10:48.717 }, 00:10:48.717 { 00:10:48.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.717 "dma_device_type": 2 00:10:48.717 } 00:10:48.717 ], 00:10:48.717 "driver_specific": {} 00:10:48.717 } 00:10:48.717 ] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.717 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.718 "name": "Existed_Raid", 00:10:48.718 "uuid": "67acebd4-3dee-4442-99d3-7741f24cf736", 00:10:48.718 "strip_size_kb": 64, 00:10:48.718 "state": "configuring", 00:10:48.718 "raid_level": "raid0", 00:10:48.718 "superblock": true, 00:10:48.718 "num_base_bdevs": 2, 00:10:48.718 "num_base_bdevs_discovered": 1, 00:10:48.718 "num_base_bdevs_operational": 2, 00:10:48.718 "base_bdevs_list": [ 00:10:48.718 { 00:10:48.718 "name": "BaseBdev1", 00:10:48.718 "uuid": "b7545412-d259-46c4-bdc9-8303a12825ed", 00:10:48.718 "is_configured": true, 00:10:48.718 "data_offset": 2048, 00:10:48.718 "data_size": 63488 00:10:48.718 }, 00:10:48.718 { 00:10:48.718 "name": "BaseBdev2", 00:10:48.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.718 "is_configured": false, 00:10:48.718 "data_offset": 0, 00:10:48.718 "data_size": 0 00:10:48.718 } 00:10:48.718 ] 00:10:48.718 }' 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.718 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.285 [2024-11-20 11:22:56.932394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.285 [2024-11-20 11:22:56.932582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 [2024-11-20 11:22:56.944444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.285 [2024-11-20 11:22:56.947034] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.285 [2024-11-20 11:22:56.947204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.285 11:22:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.285 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.285 "name": "Existed_Raid", 00:10:49.285 "uuid": "06d9d548-aa9c-4f6b-9061-d968815f694c", 00:10:49.285 "strip_size_kb": 64, 00:10:49.285 "state": "configuring", 00:10:49.285 "raid_level": "raid0", 00:10:49.285 "superblock": true, 00:10:49.285 "num_base_bdevs": 2, 00:10:49.285 "num_base_bdevs_discovered": 1, 00:10:49.285 "num_base_bdevs_operational": 2, 00:10:49.285 "base_bdevs_list": [ 00:10:49.285 { 00:10:49.285 "name": "BaseBdev1", 00:10:49.285 "uuid": "b7545412-d259-46c4-bdc9-8303a12825ed", 00:10:49.285 "is_configured": true, 00:10:49.285 "data_offset": 2048, 
00:10:49.285 "data_size": 63488 00:10:49.285 }, 00:10:49.285 { 00:10:49.285 "name": "BaseBdev2", 00:10:49.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.285 "is_configured": false, 00:10:49.285 "data_offset": 0, 00:10:49.285 "data_size": 0 00:10:49.285 } 00:10:49.285 ] 00:10:49.285 }' 00:10:49.285 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.285 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.853 [2024-11-20 11:22:57.492210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.853 [2024-11-20 11:22:57.492547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.853 [2024-11-20 11:22:57.492567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:49.853 BaseBdev2 00:10:49.853 [2024-11-20 11:22:57.492942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:49.853 [2024-11-20 11:22:57.493139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.853 [2024-11-20 11:22:57.493170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:49.853 [2024-11-20 11:22:57.493338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.853 [ 00:10:49.853 { 00:10:49.853 "name": "BaseBdev2", 00:10:49.853 "aliases": [ 00:10:49.853 "719ccddd-f22b-4b0d-ab28-5c2e1f5beba6" 00:10:49.853 ], 00:10:49.853 "product_name": "Malloc disk", 00:10:49.853 "block_size": 512, 00:10:49.853 "num_blocks": 65536, 00:10:49.853 "uuid": "719ccddd-f22b-4b0d-ab28-5c2e1f5beba6", 00:10:49.853 "assigned_rate_limits": { 00:10:49.853 "rw_ios_per_sec": 0, 00:10:49.853 "rw_mbytes_per_sec": 0, 00:10:49.853 "r_mbytes_per_sec": 0, 00:10:49.853 "w_mbytes_per_sec": 0 00:10:49.853 }, 00:10:49.853 "claimed": true, 00:10:49.853 "claim_type": 
"exclusive_write", 00:10:49.853 "zoned": false, 00:10:49.853 "supported_io_types": { 00:10:49.853 "read": true, 00:10:49.853 "write": true, 00:10:49.853 "unmap": true, 00:10:49.853 "flush": true, 00:10:49.853 "reset": true, 00:10:49.853 "nvme_admin": false, 00:10:49.853 "nvme_io": false, 00:10:49.853 "nvme_io_md": false, 00:10:49.853 "write_zeroes": true, 00:10:49.853 "zcopy": true, 00:10:49.853 "get_zone_info": false, 00:10:49.853 "zone_management": false, 00:10:49.853 "zone_append": false, 00:10:49.853 "compare": false, 00:10:49.853 "compare_and_write": false, 00:10:49.853 "abort": true, 00:10:49.853 "seek_hole": false, 00:10:49.853 "seek_data": false, 00:10:49.853 "copy": true, 00:10:49.853 "nvme_iov_md": false 00:10:49.853 }, 00:10:49.853 "memory_domains": [ 00:10:49.853 { 00:10:49.853 "dma_device_id": "system", 00:10:49.853 "dma_device_type": 1 00:10:49.853 }, 00:10:49.853 { 00:10:49.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.853 "dma_device_type": 2 00:10:49.853 } 00:10:49.853 ], 00:10:49.853 "driver_specific": {} 00:10:49.853 } 00:10:49.853 ] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.853 "name": "Existed_Raid", 00:10:49.853 "uuid": "06d9d548-aa9c-4f6b-9061-d968815f694c", 00:10:49.853 "strip_size_kb": 64, 00:10:49.853 "state": "online", 00:10:49.853 "raid_level": "raid0", 00:10:49.853 "superblock": true, 00:10:49.853 "num_base_bdevs": 2, 00:10:49.853 "num_base_bdevs_discovered": 2, 00:10:49.853 "num_base_bdevs_operational": 2, 00:10:49.853 "base_bdevs_list": [ 00:10:49.853 { 00:10:49.853 "name": "BaseBdev1", 00:10:49.853 "uuid": "b7545412-d259-46c4-bdc9-8303a12825ed", 00:10:49.853 "is_configured": true, 00:10:49.853 "data_offset": 2048, 00:10:49.853 "data_size": 63488 
00:10:49.853 }, 00:10:49.853 { 00:10:49.853 "name": "BaseBdev2", 00:10:49.853 "uuid": "719ccddd-f22b-4b0d-ab28-5c2e1f5beba6", 00:10:49.853 "is_configured": true, 00:10:49.853 "data_offset": 2048, 00:10:49.853 "data_size": 63488 00:10:49.853 } 00:10:49.853 ] 00:10:49.853 }' 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.853 11:22:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-20 11:22:58.040815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.420 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.420 "name": 
"Existed_Raid", 00:10:50.420 "aliases": [ 00:10:50.420 "06d9d548-aa9c-4f6b-9061-d968815f694c" 00:10:50.420 ], 00:10:50.420 "product_name": "Raid Volume", 00:10:50.420 "block_size": 512, 00:10:50.420 "num_blocks": 126976, 00:10:50.420 "uuid": "06d9d548-aa9c-4f6b-9061-d968815f694c", 00:10:50.420 "assigned_rate_limits": { 00:10:50.420 "rw_ios_per_sec": 0, 00:10:50.420 "rw_mbytes_per_sec": 0, 00:10:50.420 "r_mbytes_per_sec": 0, 00:10:50.420 "w_mbytes_per_sec": 0 00:10:50.420 }, 00:10:50.420 "claimed": false, 00:10:50.420 "zoned": false, 00:10:50.420 "supported_io_types": { 00:10:50.420 "read": true, 00:10:50.420 "write": true, 00:10:50.420 "unmap": true, 00:10:50.420 "flush": true, 00:10:50.420 "reset": true, 00:10:50.420 "nvme_admin": false, 00:10:50.420 "nvme_io": false, 00:10:50.420 "nvme_io_md": false, 00:10:50.420 "write_zeroes": true, 00:10:50.420 "zcopy": false, 00:10:50.420 "get_zone_info": false, 00:10:50.420 "zone_management": false, 00:10:50.420 "zone_append": false, 00:10:50.420 "compare": false, 00:10:50.420 "compare_and_write": false, 00:10:50.420 "abort": false, 00:10:50.420 "seek_hole": false, 00:10:50.420 "seek_data": false, 00:10:50.420 "copy": false, 00:10:50.420 "nvme_iov_md": false 00:10:50.420 }, 00:10:50.420 "memory_domains": [ 00:10:50.420 { 00:10:50.420 "dma_device_id": "system", 00:10:50.420 "dma_device_type": 1 00:10:50.420 }, 00:10:50.420 { 00:10:50.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.420 "dma_device_type": 2 00:10:50.420 }, 00:10:50.420 { 00:10:50.420 "dma_device_id": "system", 00:10:50.420 "dma_device_type": 1 00:10:50.420 }, 00:10:50.420 { 00:10:50.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.421 "dma_device_type": 2 00:10:50.421 } 00:10:50.421 ], 00:10:50.421 "driver_specific": { 00:10:50.421 "raid": { 00:10:50.421 "uuid": "06d9d548-aa9c-4f6b-9061-d968815f694c", 00:10:50.421 "strip_size_kb": 64, 00:10:50.421 "state": "online", 00:10:50.421 "raid_level": "raid0", 00:10:50.421 "superblock": true, 00:10:50.421 
"num_base_bdevs": 2, 00:10:50.421 "num_base_bdevs_discovered": 2, 00:10:50.421 "num_base_bdevs_operational": 2, 00:10:50.421 "base_bdevs_list": [ 00:10:50.421 { 00:10:50.421 "name": "BaseBdev1", 00:10:50.421 "uuid": "b7545412-d259-46c4-bdc9-8303a12825ed", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 }, 00:10:50.421 { 00:10:50.421 "name": "BaseBdev2", 00:10:50.421 "uuid": "719ccddd-f22b-4b0d-ab28-5c2e1f5beba6", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 } 00:10:50.421 ] 00:10:50.421 } 00:10:50.421 } 00:10:50.421 }' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.421 BaseBdev2' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.421 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.680 [2024-11-20 11:22:58.304565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.680 [2024-11-20 11:22:58.304608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.680 [2024-11-20 11:22:58.304724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.680 11:22:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.680 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.680 "name": "Existed_Raid", 00:10:50.680 "uuid": "06d9d548-aa9c-4f6b-9061-d968815f694c", 00:10:50.680 "strip_size_kb": 64, 00:10:50.680 "state": "offline", 00:10:50.680 "raid_level": "raid0", 00:10:50.680 "superblock": true, 00:10:50.680 "num_base_bdevs": 2, 00:10:50.680 "num_base_bdevs_discovered": 1, 00:10:50.680 "num_base_bdevs_operational": 1, 00:10:50.680 "base_bdevs_list": [ 00:10:50.680 { 00:10:50.681 "name": null, 00:10:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.681 "is_configured": false, 00:10:50.681 "data_offset": 0, 00:10:50.681 "data_size": 63488 00:10:50.681 }, 00:10:50.681 { 00:10:50.681 "name": "BaseBdev2", 00:10:50.681 "uuid": "719ccddd-f22b-4b0d-ab28-5c2e1f5beba6", 00:10:50.681 "is_configured": true, 00:10:50.681 "data_offset": 2048, 00:10:50.681 "data_size": 63488 00:10:50.681 } 00:10:50.681 ] 00:10:50.681 }' 00:10:50.681 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.681 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.247 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.248 11:22:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.248 11:22:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.248 [2024-11-20 11:22:58.965252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.248 [2024-11-20 11:22:58.965326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.248 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.248 11:22:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60828 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60828 ']' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60828 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60828 00:10:51.506 killing process with pid 60828 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60828' 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60828 00:10:51.506 [2024-11-20 11:22:59.141767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.506 11:22:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60828 00:10:51.506 [2024-11-20 11:22:59.156683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.442 ************************************ 
00:10:52.442 END TEST raid_state_function_test_sb 00:10:52.442 ************************************ 00:10:52.442 11:23:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.442 00:10:52.442 real 0m5.609s 00:10:52.442 user 0m8.499s 00:10:52.442 sys 0m0.796s 00:10:52.442 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.442 11:23:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.442 11:23:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:52.442 11:23:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.442 11:23:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.442 11:23:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.442 ************************************ 00:10:52.442 START TEST raid_superblock_test 00:10:52.442 ************************************ 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:52.442 
11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:52.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61080 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61080 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61080 ']' 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.442 11:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.700 [2024-11-20 11:23:00.343220] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:52.700 [2024-11-20 11:23:00.343705] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61080 ] 00:10:52.700 [2024-11-20 11:23:00.513987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.957 [2024-11-20 11:23:00.647674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.214 [2024-11-20 11:23:00.857731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.214 [2024-11-20 11:23:00.857976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.781 11:23:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.781 malloc1 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.781 [2024-11-20 11:23:01.372676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.781 [2024-11-20 11:23:01.372799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.781 [2024-11-20 11:23:01.372837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:53.781 [2024-11-20 11:23:01.372854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.781 [2024-11-20 11:23:01.376068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.781 [2024-11-20 11:23:01.376111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.781 pt1 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.781 11:23:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.781 malloc2 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.781 [2024-11-20 11:23:01.428458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:53.781 [2024-11-20 11:23:01.428553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.781 [2024-11-20 11:23:01.428583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:53.781 
[2024-11-20 11:23:01.428597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.781 [2024-11-20 11:23:01.431654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.781 [2024-11-20 11:23:01.431721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:53.781 pt2 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:53.781 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.782 [2024-11-20 11:23:01.440593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:53.782 [2024-11-20 11:23:01.443374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.782 [2024-11-20 11:23:01.443576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:53.782 [2024-11-20 11:23:01.443594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:53.782 [2024-11-20 11:23:01.443945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:53.782 [2024-11-20 11:23:01.444215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:53.782 [2024-11-20 11:23:01.444238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:53.782 [2024-11-20 11:23:01.444429] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.782 "name": "raid_bdev1", 00:10:53.782 "uuid": 
"5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:53.782 "strip_size_kb": 64, 00:10:53.782 "state": "online", 00:10:53.782 "raid_level": "raid0", 00:10:53.782 "superblock": true, 00:10:53.782 "num_base_bdevs": 2, 00:10:53.782 "num_base_bdevs_discovered": 2, 00:10:53.782 "num_base_bdevs_operational": 2, 00:10:53.782 "base_bdevs_list": [ 00:10:53.782 { 00:10:53.782 "name": "pt1", 00:10:53.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:53.782 "is_configured": true, 00:10:53.782 "data_offset": 2048, 00:10:53.782 "data_size": 63488 00:10:53.782 }, 00:10:53.782 { 00:10:53.782 "name": "pt2", 00:10:53.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.782 "is_configured": true, 00:10:53.782 "data_offset": 2048, 00:10:53.782 "data_size": 63488 00:10:53.782 } 00:10:53.782 ] 00:10:53.782 }' 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.782 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.348 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:54.348 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:54.348 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.348 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.348 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.349 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.349 11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.349 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.349 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.349 
11:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.349 [2024-11-20 11:23:01.977227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.349 11:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.349 "name": "raid_bdev1", 00:10:54.349 "aliases": [ 00:10:54.349 "5c8dfb01-c642-444f-ac90-469b0005ede4" 00:10:54.349 ], 00:10:54.349 "product_name": "Raid Volume", 00:10:54.349 "block_size": 512, 00:10:54.349 "num_blocks": 126976, 00:10:54.349 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:54.349 "assigned_rate_limits": { 00:10:54.349 "rw_ios_per_sec": 0, 00:10:54.349 "rw_mbytes_per_sec": 0, 00:10:54.349 "r_mbytes_per_sec": 0, 00:10:54.349 "w_mbytes_per_sec": 0 00:10:54.349 }, 00:10:54.349 "claimed": false, 00:10:54.349 "zoned": false, 00:10:54.349 "supported_io_types": { 00:10:54.349 "read": true, 00:10:54.349 "write": true, 00:10:54.349 "unmap": true, 00:10:54.349 "flush": true, 00:10:54.349 "reset": true, 00:10:54.349 "nvme_admin": false, 00:10:54.349 "nvme_io": false, 00:10:54.349 "nvme_io_md": false, 00:10:54.349 "write_zeroes": true, 00:10:54.349 "zcopy": false, 00:10:54.349 "get_zone_info": false, 00:10:54.349 "zone_management": false, 00:10:54.349 "zone_append": false, 00:10:54.349 "compare": false, 00:10:54.349 "compare_and_write": false, 00:10:54.349 "abort": false, 00:10:54.349 "seek_hole": false, 00:10:54.349 "seek_data": false, 00:10:54.349 "copy": false, 00:10:54.349 "nvme_iov_md": false 00:10:54.349 }, 00:10:54.349 "memory_domains": [ 00:10:54.349 { 00:10:54.349 "dma_device_id": "system", 00:10:54.349 "dma_device_type": 1 00:10:54.349 }, 00:10:54.349 { 00:10:54.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.349 "dma_device_type": 2 00:10:54.349 }, 00:10:54.349 { 00:10:54.349 "dma_device_id": "system", 00:10:54.349 
"dma_device_type": 1 00:10:54.349 }, 00:10:54.349 { 00:10:54.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.349 "dma_device_type": 2 00:10:54.349 } 00:10:54.349 ], 00:10:54.349 "driver_specific": { 00:10:54.349 "raid": { 00:10:54.349 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:54.349 "strip_size_kb": 64, 00:10:54.349 "state": "online", 00:10:54.349 "raid_level": "raid0", 00:10:54.349 "superblock": true, 00:10:54.349 "num_base_bdevs": 2, 00:10:54.349 "num_base_bdevs_discovered": 2, 00:10:54.349 "num_base_bdevs_operational": 2, 00:10:54.349 "base_bdevs_list": [ 00:10:54.349 { 00:10:54.349 "name": "pt1", 00:10:54.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.349 "is_configured": true, 00:10:54.349 "data_offset": 2048, 00:10:54.349 "data_size": 63488 00:10:54.349 }, 00:10:54.349 { 00:10:54.349 "name": "pt2", 00:10:54.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.349 "is_configured": true, 00:10:54.349 "data_offset": 2048, 00:10:54.349 "data_size": 63488 00:10:54.349 } 00:10:54.349 ] 00:10:54.349 } 00:10:54.349 } 00:10:54.349 }' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:54.349 pt2' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.349 11:23:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.349 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.607 [2024-11-20 11:23:02.237238] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5c8dfb01-c642-444f-ac90-469b0005ede4 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5c8dfb01-c642-444f-ac90-469b0005ede4 ']' 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.607 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.607 [2024-11-20 11:23:02.292864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.607 [2024-11-20 11:23:02.293054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.607 [2024-11-20 11:23:02.293198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.607 [2024-11-20 11:23:02.293265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.607 [2024-11-20 11:23:02.293289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 [2024-11-20 11:23:02.432974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:54.608 [2024-11-20 11:23:02.435555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:54.608 [2024-11-20 11:23:02.435665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:54.608 [2024-11-20 11:23:02.435896] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:54.608 [2024-11-20 11:23:02.435990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.608 [2024-11-20 11:23:02.436121] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:54.608 request: 00:10:54.608 { 00:10:54.608 "name": "raid_bdev1", 00:10:54.608 "raid_level": "raid0", 00:10:54.608 "base_bdevs": [ 00:10:54.608 "malloc1", 00:10:54.608 "malloc2" 00:10:54.608 ], 00:10:54.608 "strip_size_kb": 64, 00:10:54.608 "superblock": false, 00:10:54.608 "method": "bdev_raid_create", 00:10:54.608 "req_id": 1 00:10:54.608 } 00:10:54.608 Got JSON-RPC error response 00:10:54.608 response: 00:10:54.608 { 00:10:54.608 "code": -17, 00:10:54.608 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:54.608 } 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.608 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.866 [2024-11-20 11:23:02.501005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.866 [2024-11-20 11:23:02.501312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.866 [2024-11-20 11:23:02.501354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:54.866 [2024-11-20 11:23:02.501375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.866 [2024-11-20 11:23:02.504532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.866 [2024-11-20 11:23:02.504733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.866 [2024-11-20 11:23:02.504860] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:54.866 [2024-11-20 11:23:02.504942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.866 pt1 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.866 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.867 "name": "raid_bdev1", 00:10:54.867 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:54.867 "strip_size_kb": 64, 00:10:54.867 "state": "configuring", 00:10:54.867 "raid_level": "raid0", 00:10:54.867 "superblock": true, 00:10:54.867 "num_base_bdevs": 2, 00:10:54.867 "num_base_bdevs_discovered": 1, 00:10:54.867 "num_base_bdevs_operational": 2, 00:10:54.867 "base_bdevs_list": [ 00:10:54.867 { 00:10:54.867 "name": "pt1", 00:10:54.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:54.867 "is_configured": true, 00:10:54.867 "data_offset": 2048, 00:10:54.867 "data_size": 63488 00:10:54.867 }, 00:10:54.867 { 00:10:54.867 "name": null, 00:10:54.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.867 "is_configured": false, 00:10:54.867 "data_offset": 2048, 00:10:54.867 "data_size": 63488 00:10:54.867 } 00:10:54.867 ] 00:10:54.867 }' 00:10:54.867 11:23:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.867 11:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.433 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.433 [2024-11-20 11:23:03.017393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.433 [2024-11-20 11:23:03.017501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.433 [2024-11-20 11:23:03.017534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:55.433 [2024-11-20 11:23:03.017553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.434 [2024-11-20 11:23:03.018205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.434 [2024-11-20 11:23:03.018259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.434 [2024-11-20 11:23:03.018378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:55.434 [2024-11-20 11:23:03.018417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.434 [2024-11-20 11:23:03.018562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.434 [2024-11-20 11:23:03.018585] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:55.434 [2024-11-20 11:23:03.018911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:55.434 [2024-11-20 11:23:03.019214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.434 [2024-11-20 11:23:03.019237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:55.434 [2024-11-20 11:23:03.019424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.434 pt2 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.434 "name": "raid_bdev1", 00:10:55.434 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:55.434 "strip_size_kb": 64, 00:10:55.434 "state": "online", 00:10:55.434 "raid_level": "raid0", 00:10:55.434 "superblock": true, 00:10:55.434 "num_base_bdevs": 2, 00:10:55.434 "num_base_bdevs_discovered": 2, 00:10:55.434 "num_base_bdevs_operational": 2, 00:10:55.434 "base_bdevs_list": [ 00:10:55.434 { 00:10:55.434 "name": "pt1", 00:10:55.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:55.434 "is_configured": true, 00:10:55.434 "data_offset": 2048, 00:10:55.434 "data_size": 63488 00:10:55.434 }, 00:10:55.434 { 00:10:55.434 "name": "pt2", 00:10:55.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.434 "is_configured": true, 00:10:55.434 "data_offset": 2048, 00:10:55.434 "data_size": 63488 00:10:55.434 } 00:10:55.434 ] 00:10:55.434 }' 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.434 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:56.001 
11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.001 [2024-11-20 11:23:03.549878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.001 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.001 "name": "raid_bdev1", 00:10:56.001 "aliases": [ 00:10:56.001 "5c8dfb01-c642-444f-ac90-469b0005ede4" 00:10:56.001 ], 00:10:56.001 "product_name": "Raid Volume", 00:10:56.001 "block_size": 512, 00:10:56.001 "num_blocks": 126976, 00:10:56.001 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:56.001 "assigned_rate_limits": { 00:10:56.001 "rw_ios_per_sec": 0, 00:10:56.001 "rw_mbytes_per_sec": 0, 00:10:56.001 "r_mbytes_per_sec": 0, 00:10:56.001 "w_mbytes_per_sec": 0 00:10:56.001 }, 00:10:56.001 "claimed": false, 00:10:56.001 "zoned": false, 00:10:56.001 "supported_io_types": { 00:10:56.001 "read": true, 00:10:56.001 "write": true, 00:10:56.001 "unmap": true, 00:10:56.001 "flush": true, 00:10:56.001 "reset": true, 00:10:56.001 "nvme_admin": false, 00:10:56.001 "nvme_io": false, 00:10:56.001 "nvme_io_md": false, 00:10:56.001 
"write_zeroes": true, 00:10:56.001 "zcopy": false, 00:10:56.001 "get_zone_info": false, 00:10:56.001 "zone_management": false, 00:10:56.001 "zone_append": false, 00:10:56.001 "compare": false, 00:10:56.001 "compare_and_write": false, 00:10:56.001 "abort": false, 00:10:56.001 "seek_hole": false, 00:10:56.001 "seek_data": false, 00:10:56.001 "copy": false, 00:10:56.001 "nvme_iov_md": false 00:10:56.001 }, 00:10:56.001 "memory_domains": [ 00:10:56.001 { 00:10:56.001 "dma_device_id": "system", 00:10:56.001 "dma_device_type": 1 00:10:56.001 }, 00:10:56.001 { 00:10:56.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.001 "dma_device_type": 2 00:10:56.001 }, 00:10:56.001 { 00:10:56.001 "dma_device_id": "system", 00:10:56.001 "dma_device_type": 1 00:10:56.001 }, 00:10:56.001 { 00:10:56.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.001 "dma_device_type": 2 00:10:56.001 } 00:10:56.001 ], 00:10:56.001 "driver_specific": { 00:10:56.001 "raid": { 00:10:56.001 "uuid": "5c8dfb01-c642-444f-ac90-469b0005ede4", 00:10:56.001 "strip_size_kb": 64, 00:10:56.001 "state": "online", 00:10:56.001 "raid_level": "raid0", 00:10:56.001 "superblock": true, 00:10:56.001 "num_base_bdevs": 2, 00:10:56.001 "num_base_bdevs_discovered": 2, 00:10:56.001 "num_base_bdevs_operational": 2, 00:10:56.001 "base_bdevs_list": [ 00:10:56.001 { 00:10:56.001 "name": "pt1", 00:10:56.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.001 "is_configured": true, 00:10:56.001 "data_offset": 2048, 00:10:56.001 "data_size": 63488 00:10:56.001 }, 00:10:56.001 { 00:10:56.001 "name": "pt2", 00:10:56.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.001 "is_configured": true, 00:10:56.001 "data_offset": 2048, 00:10:56.002 "data_size": 63488 00:10:56.002 } 00:10:56.002 ] 00:10:56.002 } 00:10:56.002 } 00:10:56.002 }' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.002 pt2' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.002 11:23:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.002 [2024-11-20 11:23:03.805985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.002 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5c8dfb01-c642-444f-ac90-469b0005ede4 '!=' 5c8dfb01-c642-444f-ac90-469b0005ede4 ']' 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61080 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61080 ']' 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61080 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61080 00:10:56.263 killing process with pid 61080 
00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61080' 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61080 00:10:56.263 [2024-11-20 11:23:03.889428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.263 11:23:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61080 00:10:56.263 [2024-11-20 11:23:03.889544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.263 [2024-11-20 11:23:03.889653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.263 [2024-11-20 11:23:03.889683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:56.263 [2024-11-20 11:23:04.069910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.642 ************************************ 00:10:57.642 END TEST raid_superblock_test 00:10:57.642 ************************************ 00:10:57.642 11:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:57.642 00:10:57.642 real 0m4.825s 00:10:57.642 user 0m7.135s 00:10:57.642 sys 0m0.702s 00:10:57.642 11:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.642 11:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.642 11:23:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:57.642 11:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.642 11:23:05 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.642 11:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.642 ************************************ 00:10:57.642 START TEST raid_read_error_test 00:10:57.642 ************************************ 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.642 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.643 11:23:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xFC5sNoJrm 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61297 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61297 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61297 ']' 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.643 11:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.643 [2024-11-20 11:23:05.254694] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:10:57.643 [2024-11-20 11:23:05.254897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:10:57.643 [2024-11-20 11:23:05.469009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.902 [2024-11-20 11:23:05.606340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.161 [2024-11-20 11:23:05.812176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.161 [2024-11-20 11:23:05.812248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.419 BaseBdev1_malloc 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.419 true 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.419 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.419 [2024-11-20 11:23:06.259848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.420 [2024-11-20 11:23:06.259917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.420 [2024-11-20 11:23:06.259947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.420 [2024-11-20 11:23:06.259965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.420 [2024-11-20 11:23:06.262764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.420 [2024-11-20 11:23:06.262818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.679 BaseBdev1 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.679 BaseBdev2_malloc 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.679 true 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.679 [2024-11-20 11:23:06.319797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.679 [2024-11-20 11:23:06.320047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.679 [2024-11-20 11:23:06.320085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.679 [2024-11-20 11:23:06.320103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.679 [2024-11-20 11:23:06.323093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.679 [2024-11-20 11:23:06.323306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.679 BaseBdev2 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:58.679 11:23:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.679 [2024-11-20 11:23:06.332096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.679 [2024-11-20 11:23:06.334839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.679 [2024-11-20 11:23:06.335124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.679 [2024-11-20 11:23:06.335149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:58.679 [2024-11-20 11:23:06.335465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:58.679 [2024-11-20 11:23:06.335754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.679 [2024-11-20 11:23:06.335774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.679 [2024-11-20 11:23:06.336043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.679 "name": "raid_bdev1", 00:10:58.679 "uuid": "a3d602d6-5ce6-4ba1-b729-8db69b147cd8", 00:10:58.679 "strip_size_kb": 64, 00:10:58.679 "state": "online", 00:10:58.679 "raid_level": "raid0", 00:10:58.679 "superblock": true, 00:10:58.679 "num_base_bdevs": 2, 00:10:58.679 "num_base_bdevs_discovered": 2, 00:10:58.679 "num_base_bdevs_operational": 2, 00:10:58.679 "base_bdevs_list": [ 00:10:58.679 { 00:10:58.679 "name": "BaseBdev1", 00:10:58.679 "uuid": "d43a9a4c-0f28-5e81-a2dc-c74c3515e1e1", 00:10:58.679 "is_configured": true, 00:10:58.679 "data_offset": 2048, 00:10:58.679 "data_size": 63488 00:10:58.679 }, 00:10:58.679 { 00:10:58.679 "name": "BaseBdev2", 00:10:58.679 "uuid": "d695b486-8ab1-56d9-950e-747c438f5811", 00:10:58.679 "is_configured": true, 00:10:58.679 "data_offset": 2048, 00:10:58.679 "data_size": 63488 00:10:58.679 } 00:10:58.679 ] 00:10:58.679 }' 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.679 11:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.249 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.249 11:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.249 [2024-11-20 11:23:06.957696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.186 "name": "raid_bdev1", 00:11:00.186 "uuid": "a3d602d6-5ce6-4ba1-b729-8db69b147cd8", 00:11:00.186 "strip_size_kb": 64, 00:11:00.186 "state": "online", 00:11:00.186 "raid_level": "raid0", 00:11:00.186 "superblock": true, 00:11:00.186 "num_base_bdevs": 2, 00:11:00.186 "num_base_bdevs_discovered": 2, 00:11:00.186 "num_base_bdevs_operational": 2, 00:11:00.186 "base_bdevs_list": [ 00:11:00.186 { 00:11:00.186 "name": "BaseBdev1", 00:11:00.186 "uuid": "d43a9a4c-0f28-5e81-a2dc-c74c3515e1e1", 00:11:00.186 "is_configured": true, 00:11:00.186 "data_offset": 2048, 00:11:00.186 "data_size": 63488 00:11:00.186 }, 00:11:00.186 { 00:11:00.186 "name": "BaseBdev2", 00:11:00.186 "uuid": "d695b486-8ab1-56d9-950e-747c438f5811", 00:11:00.186 "is_configured": true, 00:11:00.186 "data_offset": 2048, 00:11:00.186 "data_size": 63488 00:11:00.186 } 00:11:00.186 ] 00:11:00.186 }' 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.186 11:23:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.753 [2024-11-20 11:23:08.380522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.753 [2024-11-20 11:23:08.380739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.753 [2024-11-20 11:23:08.384275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.753 [2024-11-20 11:23:08.384456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.753 [2024-11-20 11:23:08.384516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.753 [2024-11-20 11:23:08.384536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:00.753 { 00:11:00.753 "results": [ 00:11:00.753 { 00:11:00.753 "job": "raid_bdev1", 00:11:00.753 "core_mask": "0x1", 00:11:00.753 "workload": "randrw", 00:11:00.753 "percentage": 50, 00:11:00.753 "status": "finished", 00:11:00.753 "queue_depth": 1, 00:11:00.753 "io_size": 131072, 00:11:00.753 "runtime": 1.420743, 00:11:00.753 "iops": 10999.174375661185, 00:11:00.753 "mibps": 1374.8967969576481, 00:11:00.753 "io_failed": 1, 00:11:00.753 "io_timeout": 0, 00:11:00.753 "avg_latency_us": 126.90968285361939, 00:11:00.753 "min_latency_us": 42.35636363636364, 00:11:00.753 "max_latency_us": 1884.16 00:11:00.753 } 00:11:00.753 ], 00:11:00.753 "core_count": 1 00:11:00.753 } 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61297 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61297 ']' 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61297 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61297 00:11:00.753 killing process with pid 61297 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61297' 00:11:00.753 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61297 00:11:00.754 [2024-11-20 11:23:08.422183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.754 11:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61297 00:11:00.754 [2024-11-20 11:23:08.550584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xFC5sNoJrm 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:02.134 00:11:02.134 real 0m4.531s 00:11:02.134 user 0m5.623s 00:11:02.134 sys 0m0.574s 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.134 ************************************ 00:11:02.134 END TEST raid_read_error_test 00:11:02.134 ************************************ 00:11:02.134 11:23:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.134 11:23:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:11:02.134 11:23:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.134 11:23:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.134 11:23:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.134 ************************************ 00:11:02.134 START TEST raid_write_error_test 00:11:02.134 ************************************ 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.134 11:23:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xPPFPcG8Rz 00:11:02.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61443 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61443 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61443 ']' 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.134 11:23:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.134 [2024-11-20 11:23:09.837329] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:02.134 [2024-11-20 11:23:09.837539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:11:02.393 [2024-11-20 11:23:10.023925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.393 [2024-11-20 11:23:10.154498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.652 [2024-11-20 11:23:10.347582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.652 [2024-11-20 11:23:10.347644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 BaseBdev1_malloc 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 true 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 [2024-11-20 11:23:10.838775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.220 [2024-11-20 11:23:10.838986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.220 [2024-11-20 11:23:10.839040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:03.220 [2024-11-20 11:23:10.839068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.220 [2024-11-20 11:23:10.842282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.220 [2024-11-20 11:23:10.842465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:03.220 BaseBdev1 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 BaseBdev2_malloc 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:03.220 11:23:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 true 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 [2024-11-20 11:23:10.904792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.220 [2024-11-20 11:23:10.904866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.220 [2024-11-20 11:23:10.904903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:03.220 [2024-11-20 11:23:10.904928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.220 [2024-11-20 11:23:10.907942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.220 [2024-11-20 11:23:10.908024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.220 BaseBdev2 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.220 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.220 [2024-11-20 11:23:10.917060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:03.220 [2024-11-20 11:23:10.919553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.220 [2024-11-20 11:23:10.919857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.221 [2024-11-20 11:23:10.919883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:03.221 [2024-11-20 11:23:10.920235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:03.221 [2024-11-20 11:23:10.920525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:03.221 [2024-11-20 11:23:10.920544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:03.221 [2024-11-20 11:23:10.920790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.221 "name": "raid_bdev1", 00:11:03.221 "uuid": "6b358a62-b9ef-4b98-96a5-a52b18f42314", 00:11:03.221 "strip_size_kb": 64, 00:11:03.221 "state": "online", 00:11:03.221 "raid_level": "raid0", 00:11:03.221 "superblock": true, 00:11:03.221 "num_base_bdevs": 2, 00:11:03.221 "num_base_bdevs_discovered": 2, 00:11:03.221 "num_base_bdevs_operational": 2, 00:11:03.221 "base_bdevs_list": [ 00:11:03.221 { 00:11:03.221 "name": "BaseBdev1", 00:11:03.221 "uuid": "d26a3b7a-847b-5489-8eea-7484f7622496", 00:11:03.221 "is_configured": true, 00:11:03.221 "data_offset": 2048, 00:11:03.221 "data_size": 63488 00:11:03.221 }, 00:11:03.221 { 00:11:03.221 "name": "BaseBdev2", 00:11:03.221 "uuid": "40fc71fb-8d8d-5a6c-b739-ff40089ee4ab", 00:11:03.221 "is_configured": true, 00:11:03.221 "data_offset": 2048, 00:11:03.221 "data_size": 63488 00:11:03.221 } 00:11:03.221 ] 00:11:03.221 }' 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.221 11:23:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.788 11:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:03.788 11:23:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:03.788 [2024-11-20 11:23:11.610752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.765 11:23:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.765 "name": "raid_bdev1", 00:11:04.765 "uuid": "6b358a62-b9ef-4b98-96a5-a52b18f42314", 00:11:04.765 "strip_size_kb": 64, 00:11:04.765 "state": "online", 00:11:04.765 "raid_level": "raid0", 00:11:04.765 "superblock": true, 00:11:04.765 "num_base_bdevs": 2, 00:11:04.765 "num_base_bdevs_discovered": 2, 00:11:04.765 "num_base_bdevs_operational": 2, 00:11:04.765 "base_bdevs_list": [ 00:11:04.765 { 00:11:04.765 "name": "BaseBdev1", 00:11:04.765 "uuid": "d26a3b7a-847b-5489-8eea-7484f7622496", 00:11:04.765 "is_configured": true, 00:11:04.765 "data_offset": 2048, 00:11:04.765 "data_size": 63488 00:11:04.765 }, 00:11:04.765 { 00:11:04.765 "name": "BaseBdev2", 00:11:04.765 "uuid": "40fc71fb-8d8d-5a6c-b739-ff40089ee4ab", 00:11:04.765 "is_configured": true, 00:11:04.765 "data_offset": 2048, 00:11:04.765 "data_size": 63488 00:11:04.765 } 00:11:04.765 ] 00:11:04.765 }' 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.765 11:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 11:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:05.333 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.333 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 [2024-11-20 11:23:13.030144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:05.334 [2024-11-20 11:23:13.030406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.334 [2024-11-20 11:23:13.034034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.334 [2024-11-20 11:23:13.034279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.334 [2024-11-20 11:23:13.034369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.334 { 00:11:05.334 "results": [ 00:11:05.334 { 00:11:05.334 "job": "raid_bdev1", 00:11:05.334 "core_mask": "0x1", 00:11:05.334 "workload": "randrw", 00:11:05.334 "percentage": 50, 00:11:05.334 "status": "finished", 00:11:05.334 "queue_depth": 1, 00:11:05.334 "io_size": 131072, 00:11:05.334 "runtime": 1.416919, 00:11:05.334 "iops": 10833.364504251831, 00:11:05.334 "mibps": 1354.1705630314789, 00:11:05.334 "io_failed": 1, 00:11:05.334 "io_timeout": 0, 00:11:05.334 "avg_latency_us": 129.17322531549615, 00:11:05.334 "min_latency_us": 37.70181818181818, 00:11:05.334 "max_latency_us": 1966.08 00:11:05.334 } 00:11:05.334 ], 00:11:05.334 "core_count": 1 00:11:05.334 } 00:11:05.334 [2024-11-20 11:23:13.034590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61443 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954
-- # '[' -z 61443 ']' 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61443 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61443 00:11:05.334 killing process with pid 61443 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61443' 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61443 00:11:05.334 [2024-11-20 11:23:13.075557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.334 11:23:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61443 00:11:05.592 [2024-11-20 11:23:13.197512] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xPPFPcG8Rz 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:06.529 00:11:06.529 real 0m4.563s 00:11:06.529 user 0m5.734s 00:11:06.529 sys 0m0.581s 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.529 ************************************ 00:11:06.529 END TEST raid_write_error_test 00:11:06.529 ************************************ 00:11:06.529 11:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 11:23:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:06.529 11:23:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:06.529 11:23:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.529 11:23:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.529 11:23:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.529 ************************************ 00:11:06.529 START TEST raid_state_function_test 00:11:06.529 ************************************ 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.529 11:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61581 00:11:06.529 Process raid pid: 61581 00:11:06.529 11:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61581' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61581 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61581 ']' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.529 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 [2024-11-20 11:23:14.436057] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:06.788 [2024-11-20 11:23:14.436247] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.788 [2024-11-20 11:23:14.611364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.047 [2024-11-20 11:23:14.744129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.305 [2024-11-20 11:23:14.950828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.305 [2024-11-20 11:23:14.951079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.873 [2024-11-20 11:23:15.457936] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.873 [2024-11-20 11:23:15.458009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.873 [2024-11-20 11:23:15.458027] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.873 [2024-11-20 11:23:15.458044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.873 11:23:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.873 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.873 "name": "Existed_Raid", 00:11:07.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.873 "strip_size_kb": 64, 00:11:07.873 "state": "configuring", 00:11:07.873 
"raid_level": "concat", 00:11:07.873 "superblock": false, 00:11:07.873 "num_base_bdevs": 2, 00:11:07.873 "num_base_bdevs_discovered": 0, 00:11:07.873 "num_base_bdevs_operational": 2, 00:11:07.873 "base_bdevs_list": [ 00:11:07.873 { 00:11:07.873 "name": "BaseBdev1", 00:11:07.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.873 "is_configured": false, 00:11:07.873 "data_offset": 0, 00:11:07.873 "data_size": 0 00:11:07.873 }, 00:11:07.873 { 00:11:07.873 "name": "BaseBdev2", 00:11:07.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.873 "is_configured": false, 00:11:07.874 "data_offset": 0, 00:11:07.874 "data_size": 0 00:11:07.874 } 00:11:07.874 ] 00:11:07.874 }' 00:11:07.874 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.874 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.440 [2024-11-20 11:23:15.982019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.440 [2024-11-20 11:23:15.982062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:08.440 [2024-11-20 11:23:15.994013] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.440 [2024-11-20 11:23:15.994197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.440 [2024-11-20 11:23:15.994318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.440 [2024-11-20 11:23:15.994381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.440 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.441 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.441 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.441 [2024-11-20 11:23:16.042632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.441 BaseBdev1 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.441 [ 00:11:08.441 { 00:11:08.441 "name": "BaseBdev1", 00:11:08.441 "aliases": [ 00:11:08.441 "5a0666cd-57e2-43e6-ad17-8c35374e9452" 00:11:08.441 ], 00:11:08.441 "product_name": "Malloc disk", 00:11:08.441 "block_size": 512, 00:11:08.441 "num_blocks": 65536, 00:11:08.441 "uuid": "5a0666cd-57e2-43e6-ad17-8c35374e9452", 00:11:08.441 "assigned_rate_limits": { 00:11:08.441 "rw_ios_per_sec": 0, 00:11:08.441 "rw_mbytes_per_sec": 0, 00:11:08.441 "r_mbytes_per_sec": 0, 00:11:08.441 "w_mbytes_per_sec": 0 00:11:08.441 }, 00:11:08.441 "claimed": true, 00:11:08.441 "claim_type": "exclusive_write", 00:11:08.441 "zoned": false, 00:11:08.441 "supported_io_types": { 00:11:08.441 "read": true, 00:11:08.441 "write": true, 00:11:08.441 "unmap": true, 00:11:08.441 "flush": true, 00:11:08.441 "reset": true, 00:11:08.441 "nvme_admin": false, 00:11:08.441 "nvme_io": false, 00:11:08.441 "nvme_io_md": false, 00:11:08.441 "write_zeroes": true, 00:11:08.441 "zcopy": true, 00:11:08.441 "get_zone_info": false, 00:11:08.441 "zone_management": false, 00:11:08.441 "zone_append": false, 00:11:08.441 "compare": false, 00:11:08.441 "compare_and_write": false, 00:11:08.441 "abort": true, 00:11:08.441 "seek_hole": false, 00:11:08.441 "seek_data": false, 00:11:08.441 "copy": true, 00:11:08.441 "nvme_iov_md": 
false 00:11:08.441 }, 00:11:08.441 "memory_domains": [ 00:11:08.441 { 00:11:08.441 "dma_device_id": "system", 00:11:08.441 "dma_device_type": 1 00:11:08.441 }, 00:11:08.441 { 00:11:08.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.441 "dma_device_type": 2 00:11:08.441 } 00:11:08.441 ], 00:11:08.441 "driver_specific": {} 00:11:08.441 } 00:11:08.441 ] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.441 11:23:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.441 "name": "Existed_Raid", 00:11:08.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.441 "strip_size_kb": 64, 00:11:08.441 "state": "configuring", 00:11:08.441 "raid_level": "concat", 00:11:08.441 "superblock": false, 00:11:08.441 "num_base_bdevs": 2, 00:11:08.441 "num_base_bdevs_discovered": 1, 00:11:08.441 "num_base_bdevs_operational": 2, 00:11:08.441 "base_bdevs_list": [ 00:11:08.441 { 00:11:08.441 "name": "BaseBdev1", 00:11:08.441 "uuid": "5a0666cd-57e2-43e6-ad17-8c35374e9452", 00:11:08.441 "is_configured": true, 00:11:08.441 "data_offset": 0, 00:11:08.441 "data_size": 65536 00:11:08.441 }, 00:11:08.441 { 00:11:08.441 "name": "BaseBdev2", 00:11:08.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.441 "is_configured": false, 00:11:08.441 "data_offset": 0, 00:11:08.441 "data_size": 0 00:11:08.441 } 00:11:08.441 ] 00:11:08.441 }' 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.441 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.007 [2024-11-20 11:23:16.602906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.007 [2024-11-20 11:23:16.602970] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.007 [2024-11-20 11:23:16.610944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.007 [2024-11-20 11:23:16.613561] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.007 [2024-11-20 11:23:16.613632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.008 "name": "Existed_Raid", 00:11:09.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.008 "strip_size_kb": 64, 00:11:09.008 "state": "configuring", 00:11:09.008 "raid_level": "concat", 00:11:09.008 "superblock": false, 00:11:09.008 "num_base_bdevs": 2, 00:11:09.008 "num_base_bdevs_discovered": 1, 00:11:09.008 "num_base_bdevs_operational": 2, 00:11:09.008 "base_bdevs_list": [ 00:11:09.008 { 00:11:09.008 "name": "BaseBdev1", 00:11:09.008 "uuid": "5a0666cd-57e2-43e6-ad17-8c35374e9452", 00:11:09.008 "is_configured": true, 00:11:09.008 "data_offset": 0, 00:11:09.008 "data_size": 65536 00:11:09.008 }, 00:11:09.008 { 00:11:09.008 "name": "BaseBdev2", 00:11:09.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.008 "is_configured": false, 00:11:09.008 "data_offset": 0, 00:11:09.008 "data_size": 0 
00:11:09.008 } 00:11:09.008 ] 00:11:09.008 }' 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.008 11:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 [2024-11-20 11:23:17.193016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.575 [2024-11-20 11:23:17.193075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.575 [2024-11-20 11:23:17.193089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:09.575 [2024-11-20 11:23:17.193468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:09.575 [2024-11-20 11:23:17.193837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.575 [2024-11-20 11:23:17.193861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:09.575 [2024-11-20 11:23:17.194220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.575 BaseBdev2 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.575 11:23:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.575 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.575 [ 00:11:09.575 { 00:11:09.575 "name": "BaseBdev2", 00:11:09.575 "aliases": [ 00:11:09.575 "e3678aaf-f698-454d-af1d-79fd4277681f" 00:11:09.575 ], 00:11:09.575 "product_name": "Malloc disk", 00:11:09.575 "block_size": 512, 00:11:09.575 "num_blocks": 65536, 00:11:09.575 "uuid": "e3678aaf-f698-454d-af1d-79fd4277681f", 00:11:09.575 "assigned_rate_limits": { 00:11:09.575 "rw_ios_per_sec": 0, 00:11:09.575 "rw_mbytes_per_sec": 0, 00:11:09.575 "r_mbytes_per_sec": 0, 00:11:09.575 "w_mbytes_per_sec": 0 00:11:09.575 }, 00:11:09.575 "claimed": true, 00:11:09.575 "claim_type": "exclusive_write", 00:11:09.575 "zoned": false, 00:11:09.575 "supported_io_types": { 00:11:09.575 "read": true, 00:11:09.575 "write": true, 00:11:09.575 "unmap": true, 00:11:09.575 "flush": true, 00:11:09.575 "reset": true, 00:11:09.575 "nvme_admin": false, 00:11:09.575 "nvme_io": false, 00:11:09.575 "nvme_io_md": 
false, 00:11:09.575 "write_zeroes": true, 00:11:09.575 "zcopy": true, 00:11:09.575 "get_zone_info": false, 00:11:09.575 "zone_management": false, 00:11:09.575 "zone_append": false, 00:11:09.575 "compare": false, 00:11:09.575 "compare_and_write": false, 00:11:09.575 "abort": true, 00:11:09.575 "seek_hole": false, 00:11:09.575 "seek_data": false, 00:11:09.575 "copy": true, 00:11:09.575 "nvme_iov_md": false 00:11:09.575 }, 00:11:09.575 "memory_domains": [ 00:11:09.576 { 00:11:09.576 "dma_device_id": "system", 00:11:09.576 "dma_device_type": 1 00:11:09.576 }, 00:11:09.576 { 00:11:09.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.576 "dma_device_type": 2 00:11:09.576 } 00:11:09.576 ], 00:11:09.576 "driver_specific": {} 00:11:09.576 } 00:11:09.576 ] 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.576 "name": "Existed_Raid", 00:11:09.576 "uuid": "e868a11f-32d5-4c33-8be1-b0e10f964b74", 00:11:09.576 "strip_size_kb": 64, 00:11:09.576 "state": "online", 00:11:09.576 "raid_level": "concat", 00:11:09.576 "superblock": false, 00:11:09.576 "num_base_bdevs": 2, 00:11:09.576 "num_base_bdevs_discovered": 2, 00:11:09.576 "num_base_bdevs_operational": 2, 00:11:09.576 "base_bdevs_list": [ 00:11:09.576 { 00:11:09.576 "name": "BaseBdev1", 00:11:09.576 "uuid": "5a0666cd-57e2-43e6-ad17-8c35374e9452", 00:11:09.576 "is_configured": true, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 65536 00:11:09.576 }, 00:11:09.576 { 00:11:09.576 "name": "BaseBdev2", 00:11:09.576 "uuid": "e3678aaf-f698-454d-af1d-79fd4277681f", 00:11:09.576 "is_configured": true, 00:11:09.576 "data_offset": 0, 00:11:09.576 "data_size": 65536 00:11:09.576 } 00:11:09.576 ] 00:11:09.576 }' 00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:09.576 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.143 [2024-11-20 11:23:17.761600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.143 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.143 "name": "Existed_Raid", 00:11:10.143 "aliases": [ 00:11:10.143 "e868a11f-32d5-4c33-8be1-b0e10f964b74" 00:11:10.143 ], 00:11:10.143 "product_name": "Raid Volume", 00:11:10.143 "block_size": 512, 00:11:10.143 "num_blocks": 131072, 00:11:10.143 "uuid": "e868a11f-32d5-4c33-8be1-b0e10f964b74", 00:11:10.143 "assigned_rate_limits": { 00:11:10.143 "rw_ios_per_sec": 0, 00:11:10.143 "rw_mbytes_per_sec": 0, 00:11:10.143 "r_mbytes_per_sec": 
0, 00:11:10.143 "w_mbytes_per_sec": 0 00:11:10.143 }, 00:11:10.143 "claimed": false, 00:11:10.143 "zoned": false, 00:11:10.143 "supported_io_types": { 00:11:10.143 "read": true, 00:11:10.143 "write": true, 00:11:10.143 "unmap": true, 00:11:10.143 "flush": true, 00:11:10.143 "reset": true, 00:11:10.143 "nvme_admin": false, 00:11:10.143 "nvme_io": false, 00:11:10.143 "nvme_io_md": false, 00:11:10.143 "write_zeroes": true, 00:11:10.143 "zcopy": false, 00:11:10.143 "get_zone_info": false, 00:11:10.143 "zone_management": false, 00:11:10.143 "zone_append": false, 00:11:10.143 "compare": false, 00:11:10.143 "compare_and_write": false, 00:11:10.143 "abort": false, 00:11:10.143 "seek_hole": false, 00:11:10.143 "seek_data": false, 00:11:10.143 "copy": false, 00:11:10.143 "nvme_iov_md": false 00:11:10.143 }, 00:11:10.143 "memory_domains": [ 00:11:10.143 { 00:11:10.143 "dma_device_id": "system", 00:11:10.143 "dma_device_type": 1 00:11:10.143 }, 00:11:10.143 { 00:11:10.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.143 "dma_device_type": 2 00:11:10.143 }, 00:11:10.143 { 00:11:10.143 "dma_device_id": "system", 00:11:10.143 "dma_device_type": 1 00:11:10.143 }, 00:11:10.144 { 00:11:10.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.144 "dma_device_type": 2 00:11:10.144 } 00:11:10.144 ], 00:11:10.144 "driver_specific": { 00:11:10.144 "raid": { 00:11:10.144 "uuid": "e868a11f-32d5-4c33-8be1-b0e10f964b74", 00:11:10.144 "strip_size_kb": 64, 00:11:10.144 "state": "online", 00:11:10.144 "raid_level": "concat", 00:11:10.144 "superblock": false, 00:11:10.144 "num_base_bdevs": 2, 00:11:10.144 "num_base_bdevs_discovered": 2, 00:11:10.144 "num_base_bdevs_operational": 2, 00:11:10.144 "base_bdevs_list": [ 00:11:10.144 { 00:11:10.144 "name": "BaseBdev1", 00:11:10.144 "uuid": "5a0666cd-57e2-43e6-ad17-8c35374e9452", 00:11:10.144 "is_configured": true, 00:11:10.144 "data_offset": 0, 00:11:10.144 "data_size": 65536 00:11:10.144 }, 00:11:10.144 { 00:11:10.144 "name": "BaseBdev2", 
00:11:10.144 "uuid": "e3678aaf-f698-454d-af1d-79fd4277681f", 00:11:10.144 "is_configured": true, 00:11:10.144 "data_offset": 0, 00:11:10.144 "data_size": 65536 00:11:10.144 } 00:11:10.144 ] 00:11:10.144 } 00:11:10.144 } 00:11:10.144 }' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:10.144 BaseBdev2' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.144 11:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.403 [2024-11-20 11:23:18.025367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.403 [2024-11-20 11:23:18.025412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.403 [2024-11-20 11:23:18.025487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.403 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.404 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.404 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.404 "name": "Existed_Raid", 00:11:10.404 "uuid": "e868a11f-32d5-4c33-8be1-b0e10f964b74", 00:11:10.404 "strip_size_kb": 64, 00:11:10.404 
"state": "offline", 00:11:10.404 "raid_level": "concat", 00:11:10.404 "superblock": false, 00:11:10.404 "num_base_bdevs": 2, 00:11:10.404 "num_base_bdevs_discovered": 1, 00:11:10.404 "num_base_bdevs_operational": 1, 00:11:10.404 "base_bdevs_list": [ 00:11:10.404 { 00:11:10.404 "name": null, 00:11:10.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.404 "is_configured": false, 00:11:10.404 "data_offset": 0, 00:11:10.404 "data_size": 65536 00:11:10.404 }, 00:11:10.404 { 00:11:10.404 "name": "BaseBdev2", 00:11:10.404 "uuid": "e3678aaf-f698-454d-af1d-79fd4277681f", 00:11:10.404 "is_configured": true, 00:11:10.404 "data_offset": 0, 00:11:10.404 "data_size": 65536 00:11:10.404 } 00:11:10.404 ] 00:11:10.404 }' 00:11:10.404 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.404 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 [2024-11-20 11:23:18.670649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.971 [2024-11-20 11:23:18.670717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.971 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61581 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61581 ']' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61581 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61581 00:11:11.229 killing process with pid 61581 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61581' 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61581 00:11:11.229 [2024-11-20 11:23:18.848640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.229 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61581 00:11:11.229 [2024-11-20 11:23:18.863741] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.161 ************************************ 00:11:12.161 END TEST raid_state_function_test 00:11:12.161 ************************************ 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.161 00:11:12.161 real 0m5.585s 00:11:12.161 user 0m8.482s 00:11:12.161 sys 0m0.751s 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.161 11:23:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:12.161 11:23:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:12.161 11:23:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.161 11:23:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.161 ************************************ 00:11:12.161 START TEST raid_state_function_test_sb 00:11:12.161 ************************************ 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.161 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:12.162 Process raid pid: 61840 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61840 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61840' 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61840 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61840 ']' 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:12.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.162 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.420 [2024-11-20 11:23:20.092156] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:11:12.420 [2024-11-20 11:23:20.092790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.679 [2024-11-20 11:23:20.282185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.679 [2024-11-20 11:23:20.424164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.935 [2024-11-20 11:23:20.679584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.935 [2024-11-20 11:23:20.679662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.500 [2024-11-20 11:23:21.141179] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.500 [2024-11-20 11:23:21.141463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.500 [2024-11-20 11:23:21.141491] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.500 [2024-11-20 11:23:21.141509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.500 "name": "Existed_Raid", 00:11:13.500 "uuid": "ca55c1c7-490f-4914-ba4f-0a93c25268e5", 00:11:13.500 "strip_size_kb": 64, 00:11:13.500 "state": "configuring", 00:11:13.500 "raid_level": "concat", 00:11:13.500 "superblock": true, 00:11:13.500 "num_base_bdevs": 2, 00:11:13.500 "num_base_bdevs_discovered": 0, 00:11:13.500 "num_base_bdevs_operational": 2, 00:11:13.500 "base_bdevs_list": [ 00:11:13.500 { 00:11:13.500 "name": "BaseBdev1", 00:11:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.500 "is_configured": false, 00:11:13.500 "data_offset": 0, 00:11:13.500 "data_size": 0 00:11:13.500 }, 00:11:13.500 { 00:11:13.500 "name": "BaseBdev2", 00:11:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.500 "is_configured": false, 00:11:13.500 "data_offset": 0, 00:11:13.500 "data_size": 0 00:11:13.500 } 00:11:13.500 ] 00:11:13.500 }' 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.500 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 [2024-11-20 11:23:21.677304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: Existed_Raid 00:11:14.069 [2024-11-20 11:23:21.677493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 [2024-11-20 11:23:21.685294] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.069 [2024-11-20 11:23:21.685364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.069 [2024-11-20 11:23:21.685380] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.069 [2024-11-20 11:23:21.685398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 [2024-11-20 11:23:21.731707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.069 BaseBdev1 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.069 11:23:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.069 [ 00:11:14.069 { 00:11:14.069 "name": "BaseBdev1", 00:11:14.069 "aliases": [ 00:11:14.069 "d7920cbf-ea00-48e7-9fba-976c779bd321" 00:11:14.069 ], 00:11:14.069 "product_name": "Malloc disk", 00:11:14.069 "block_size": 512, 00:11:14.069 "num_blocks": 65536, 00:11:14.069 "uuid": "d7920cbf-ea00-48e7-9fba-976c779bd321", 00:11:14.069 "assigned_rate_limits": { 00:11:14.069 "rw_ios_per_sec": 0, 00:11:14.069 "rw_mbytes_per_sec": 0, 00:11:14.069 "r_mbytes_per_sec": 0, 00:11:14.069 "w_mbytes_per_sec": 0 
00:11:14.069 }, 00:11:14.069 "claimed": true, 00:11:14.069 "claim_type": "exclusive_write", 00:11:14.069 "zoned": false, 00:11:14.069 "supported_io_types": { 00:11:14.069 "read": true, 00:11:14.069 "write": true, 00:11:14.069 "unmap": true, 00:11:14.069 "flush": true, 00:11:14.069 "reset": true, 00:11:14.069 "nvme_admin": false, 00:11:14.069 "nvme_io": false, 00:11:14.069 "nvme_io_md": false, 00:11:14.069 "write_zeroes": true, 00:11:14.069 "zcopy": true, 00:11:14.069 "get_zone_info": false, 00:11:14.069 "zone_management": false, 00:11:14.069 "zone_append": false, 00:11:14.069 "compare": false, 00:11:14.069 "compare_and_write": false, 00:11:14.069 "abort": true, 00:11:14.069 "seek_hole": false, 00:11:14.069 "seek_data": false, 00:11:14.069 "copy": true, 00:11:14.069 "nvme_iov_md": false 00:11:14.069 }, 00:11:14.069 "memory_domains": [ 00:11:14.069 { 00:11:14.069 "dma_device_id": "system", 00:11:14.069 "dma_device_type": 1 00:11:14.069 }, 00:11:14.069 { 00:11:14.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.069 "dma_device_type": 2 00:11:14.069 } 00:11:14.069 ], 00:11:14.069 "driver_specific": {} 00:11:14.069 } 00:11:14.069 ] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.069 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.070 "name": "Existed_Raid", 00:11:14.070 "uuid": "ca60c417-e66f-4c7a-83e9-b5661a432dd7", 00:11:14.070 "strip_size_kb": 64, 00:11:14.070 "state": "configuring", 00:11:14.070 "raid_level": "concat", 00:11:14.070 "superblock": true, 00:11:14.070 "num_base_bdevs": 2, 00:11:14.070 "num_base_bdevs_discovered": 1, 00:11:14.070 "num_base_bdevs_operational": 2, 00:11:14.070 "base_bdevs_list": [ 00:11:14.070 { 00:11:14.070 "name": "BaseBdev1", 00:11:14.070 "uuid": "d7920cbf-ea00-48e7-9fba-976c779bd321", 00:11:14.070 "is_configured": true, 00:11:14.070 "data_offset": 2048, 00:11:14.070 "data_size": 63488 00:11:14.070 }, 00:11:14.070 { 00:11:14.070 "name": "BaseBdev2", 00:11:14.070 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:14.070 "is_configured": false, 00:11:14.070 "data_offset": 0, 00:11:14.070 "data_size": 0 00:11:14.070 } 00:11:14.070 ] 00:11:14.070 }' 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.070 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.636 [2024-11-20 11:23:22.291942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.636 [2024-11-20 11:23:22.292222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.636 [2024-11-20 11:23:22.303982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.636 [2024-11-20 11:23:22.306495] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.636 [2024-11-20 11:23:22.306553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.636 
11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.636 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.637 
11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.637 "name": "Existed_Raid", 00:11:14.637 "uuid": "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7", 00:11:14.637 "strip_size_kb": 64, 00:11:14.637 "state": "configuring", 00:11:14.637 "raid_level": "concat", 00:11:14.637 "superblock": true, 00:11:14.637 "num_base_bdevs": 2, 00:11:14.637 "num_base_bdevs_discovered": 1, 00:11:14.637 "num_base_bdevs_operational": 2, 00:11:14.637 "base_bdevs_list": [ 00:11:14.637 { 00:11:14.637 "name": "BaseBdev1", 00:11:14.637 "uuid": "d7920cbf-ea00-48e7-9fba-976c779bd321", 00:11:14.637 "is_configured": true, 00:11:14.637 "data_offset": 2048, 00:11:14.637 "data_size": 63488 00:11:14.637 }, 00:11:14.637 { 00:11:14.637 "name": "BaseBdev2", 00:11:14.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.637 "is_configured": false, 00:11:14.637 "data_offset": 0, 00:11:14.637 "data_size": 0 00:11:14.637 } 00:11:14.637 ] 00:11:14.637 }' 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.637 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.211 [2024-11-20 11:23:22.878780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.211 [2024-11-20 11:23:22.879119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.211 [2024-11-20 11:23:22.879143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:15.211 BaseBdev2 00:11:15.211 [2024-11-20 11:23:22.879555] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:15.211 [2024-11-20 11:23:22.879815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.211 [2024-11-20 11:23:22.879862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:15.211 [2024-11-20 11:23:22.880080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.211 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.211 
11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.211 [ 00:11:15.211 { 00:11:15.211 "name": "BaseBdev2", 00:11:15.211 "aliases": [ 00:11:15.211 "070ccce9-a51d-4dc7-9a2c-d45714c367d4" 00:11:15.211 ], 00:11:15.211 "product_name": "Malloc disk", 00:11:15.211 "block_size": 512, 00:11:15.211 "num_blocks": 65536, 00:11:15.211 "uuid": "070ccce9-a51d-4dc7-9a2c-d45714c367d4", 00:11:15.211 "assigned_rate_limits": { 00:11:15.211 "rw_ios_per_sec": 0, 00:11:15.211 "rw_mbytes_per_sec": 0, 00:11:15.211 "r_mbytes_per_sec": 0, 00:11:15.211 "w_mbytes_per_sec": 0 00:11:15.212 }, 00:11:15.212 "claimed": true, 00:11:15.212 "claim_type": "exclusive_write", 00:11:15.212 "zoned": false, 00:11:15.212 "supported_io_types": { 00:11:15.212 "read": true, 00:11:15.212 "write": true, 00:11:15.212 "unmap": true, 00:11:15.212 "flush": true, 00:11:15.212 "reset": true, 00:11:15.212 "nvme_admin": false, 00:11:15.212 "nvme_io": false, 00:11:15.212 "nvme_io_md": false, 00:11:15.212 "write_zeroes": true, 00:11:15.212 "zcopy": true, 00:11:15.212 "get_zone_info": false, 00:11:15.212 "zone_management": false, 00:11:15.212 "zone_append": false, 00:11:15.212 "compare": false, 00:11:15.212 "compare_and_write": false, 00:11:15.212 "abort": true, 00:11:15.212 "seek_hole": false, 00:11:15.212 "seek_data": false, 00:11:15.212 "copy": true, 00:11:15.212 "nvme_iov_md": false 00:11:15.212 }, 00:11:15.212 "memory_domains": [ 00:11:15.212 { 00:11:15.212 "dma_device_id": "system", 00:11:15.212 "dma_device_type": 1 00:11:15.212 }, 00:11:15.212 { 00:11:15.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.212 "dma_device_type": 2 00:11:15.212 } 00:11:15.212 ], 00:11:15.212 "driver_specific": {} 00:11:15.212 } 00:11:15.212 ] 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.212 11:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.212 11:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.212 "name": "Existed_Raid", 00:11:15.212 "uuid": "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7", 00:11:15.212 "strip_size_kb": 64, 00:11:15.212 "state": "online", 00:11:15.212 "raid_level": "concat", 00:11:15.212 "superblock": true, 00:11:15.212 "num_base_bdevs": 2, 00:11:15.212 "num_base_bdevs_discovered": 2, 00:11:15.212 "num_base_bdevs_operational": 2, 00:11:15.212 "base_bdevs_list": [ 00:11:15.212 { 00:11:15.212 "name": "BaseBdev1", 00:11:15.212 "uuid": "d7920cbf-ea00-48e7-9fba-976c779bd321", 00:11:15.212 "is_configured": true, 00:11:15.212 "data_offset": 2048, 00:11:15.212 "data_size": 63488 00:11:15.212 }, 00:11:15.212 { 00:11:15.212 "name": "BaseBdev2", 00:11:15.212 "uuid": "070ccce9-a51d-4dc7-9a2c-d45714c367d4", 00:11:15.212 "is_configured": true, 00:11:15.212 "data_offset": 2048, 00:11:15.212 "data_size": 63488 00:11:15.212 } 00:11:15.212 ] 00:11:15.212 }' 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.212 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.779 11:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.779 [2024-11-20 11:23:23.431372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.779 "name": "Existed_Raid", 00:11:15.779 "aliases": [ 00:11:15.779 "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7" 00:11:15.779 ], 00:11:15.779 "product_name": "Raid Volume", 00:11:15.779 "block_size": 512, 00:11:15.779 "num_blocks": 126976, 00:11:15.779 "uuid": "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7", 00:11:15.779 "assigned_rate_limits": { 00:11:15.779 "rw_ios_per_sec": 0, 00:11:15.779 "rw_mbytes_per_sec": 0, 00:11:15.779 "r_mbytes_per_sec": 0, 00:11:15.779 "w_mbytes_per_sec": 0 00:11:15.779 }, 00:11:15.779 "claimed": false, 00:11:15.779 "zoned": false, 00:11:15.779 "supported_io_types": { 00:11:15.779 "read": true, 00:11:15.779 "write": true, 00:11:15.779 "unmap": true, 00:11:15.779 "flush": true, 00:11:15.779 "reset": true, 00:11:15.779 "nvme_admin": false, 00:11:15.779 "nvme_io": false, 00:11:15.779 "nvme_io_md": false, 00:11:15.779 "write_zeroes": true, 00:11:15.779 "zcopy": false, 00:11:15.779 "get_zone_info": false, 00:11:15.779 "zone_management": false, 00:11:15.779 "zone_append": false, 00:11:15.779 "compare": false, 00:11:15.779 "compare_and_write": false, 00:11:15.779 "abort": false, 00:11:15.779 "seek_hole": false, 00:11:15.779 "seek_data": false, 00:11:15.779 "copy": false, 00:11:15.779 "nvme_iov_md": false 00:11:15.779 }, 00:11:15.779 "memory_domains": [ 00:11:15.779 { 00:11:15.779 "dma_device_id": 
"system", 00:11:15.779 "dma_device_type": 1 00:11:15.779 }, 00:11:15.779 { 00:11:15.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.779 "dma_device_type": 2 00:11:15.779 }, 00:11:15.779 { 00:11:15.779 "dma_device_id": "system", 00:11:15.779 "dma_device_type": 1 00:11:15.779 }, 00:11:15.779 { 00:11:15.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.779 "dma_device_type": 2 00:11:15.779 } 00:11:15.779 ], 00:11:15.779 "driver_specific": { 00:11:15.779 "raid": { 00:11:15.779 "uuid": "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7", 00:11:15.779 "strip_size_kb": 64, 00:11:15.779 "state": "online", 00:11:15.779 "raid_level": "concat", 00:11:15.779 "superblock": true, 00:11:15.779 "num_base_bdevs": 2, 00:11:15.779 "num_base_bdevs_discovered": 2, 00:11:15.779 "num_base_bdevs_operational": 2, 00:11:15.779 "base_bdevs_list": [ 00:11:15.779 { 00:11:15.779 "name": "BaseBdev1", 00:11:15.779 "uuid": "d7920cbf-ea00-48e7-9fba-976c779bd321", 00:11:15.779 "is_configured": true, 00:11:15.779 "data_offset": 2048, 00:11:15.779 "data_size": 63488 00:11:15.779 }, 00:11:15.779 { 00:11:15.779 "name": "BaseBdev2", 00:11:15.779 "uuid": "070ccce9-a51d-4dc7-9a2c-d45714c367d4", 00:11:15.779 "is_configured": true, 00:11:15.779 "data_offset": 2048, 00:11:15.779 "data_size": 63488 00:11:15.779 } 00:11:15.779 ] 00:11:15.779 } 00:11:15.779 } 00:11:15.779 }' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.779 BaseBdev2' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.779 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.780 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.095 [2024-11-20 11:23:23.695149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.095 [2024-11-20 11:23:23.695196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.095 [2024-11-20 11:23:23.695262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:16.095 11:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.095 "name": "Existed_Raid", 00:11:16.095 "uuid": "c3fbacd8-5f99-4802-96ab-8f0dbb52b0a7", 00:11:16.095 "strip_size_kb": 64, 00:11:16.095 "state": "offline", 00:11:16.095 "raid_level": "concat", 00:11:16.095 "superblock": true, 00:11:16.095 "num_base_bdevs": 2, 00:11:16.095 "num_base_bdevs_discovered": 1, 00:11:16.095 "num_base_bdevs_operational": 1, 00:11:16.095 "base_bdevs_list": [ 00:11:16.095 { 00:11:16.095 "name": null, 00:11:16.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.095 "is_configured": false, 00:11:16.095 "data_offset": 0, 00:11:16.095 "data_size": 63488 00:11:16.095 }, 00:11:16.095 { 00:11:16.095 "name": "BaseBdev2", 00:11:16.095 "uuid": "070ccce9-a51d-4dc7-9a2c-d45714c367d4", 00:11:16.095 "is_configured": true, 00:11:16.095 "data_offset": 2048, 00:11:16.095 "data_size": 63488 00:11:16.095 } 00:11:16.095 ] 00:11:16.095 }' 00:11:16.095 
11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.095 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.662 [2024-11-20 11:23:24.377545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.662 [2024-11-20 11:23:24.377635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.662 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61840 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61840 ']' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61840 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61840 00:11:16.920 killing process with pid 61840 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61840' 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61840 00:11:16.920 [2024-11-20 11:23:24.566071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.920 11:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61840 00:11:16.920 [2024-11-20 11:23:24.581879] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.854 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:17.854 00:11:17.854 real 0m5.724s 00:11:17.854 user 0m8.628s 00:11:17.854 sys 0m0.809s 00:11:17.854 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.854 ************************************ 00:11:17.854 END TEST raid_state_function_test_sb 00:11:17.854 ************************************ 00:11:17.854 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.113 11:23:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:18.113 11:23:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:18.113 11:23:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.113 11:23:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.113 ************************************ 00:11:18.113 START TEST raid_superblock_test 00:11:18.113 ************************************ 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62097 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62097 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62097 ']' 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.113 11:23:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.113 [2024-11-20 11:23:25.861248] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:11:18.113 [2024-11-20 11:23:25.861415] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62097 ] 00:11:18.372 [2024-11-20 11:23:26.055459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.700 [2024-11-20 11:23:26.229183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.700 [2024-11-20 11:23:26.492503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.700 [2024-11-20 11:23:26.492564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.266 11:23:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.266 11:23:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.266 malloc1 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.266 [2024-11-20 11:23:27.010247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:19.266 [2024-11-20 11:23:27.010507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.266 [2024-11-20 11:23:27.010703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:19.266 [2024-11-20 11:23:27.010853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.266 
[2024-11-20 11:23:27.014123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.266 pt1 00:11:19.266 [2024-11-20 11:23:27.014319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.266 malloc2 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:19.266 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.266 11:23:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.266 [2024-11-20 11:23:27.065956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:19.266 [2024-11-20 11:23:27.066052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.267 [2024-11-20 11:23:27.066092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:19.267 [2024-11-20 11:23:27.066112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.267 [2024-11-20 11:23:27.069370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.267 pt2 00:11:19.267 [2024-11-20 11:23:27.069593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.267 [2024-11-20 11:23:27.074019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:19.267 [2024-11-20 11:23:27.076475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:19.267 [2024-11-20 11:23:27.076860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:19.267 [2024-11-20 11:23:27.077023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:19.267 
[2024-11-20 11:23:27.077509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:19.267 [2024-11-20 11:23:27.077903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:19.267 [2024-11-20 11:23:27.078063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:19.267 [2024-11-20 11:23:27.078526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.267 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.525 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.525 "name": "raid_bdev1", 00:11:19.525 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:19.525 "strip_size_kb": 64, 00:11:19.525 "state": "online", 00:11:19.525 "raid_level": "concat", 00:11:19.525 "superblock": true, 00:11:19.525 "num_base_bdevs": 2, 00:11:19.525 "num_base_bdevs_discovered": 2, 00:11:19.525 "num_base_bdevs_operational": 2, 00:11:19.525 "base_bdevs_list": [ 00:11:19.525 { 00:11:19.525 "name": "pt1", 00:11:19.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.526 "is_configured": true, 00:11:19.526 "data_offset": 2048, 00:11:19.526 "data_size": 63488 00:11:19.526 }, 00:11:19.526 { 00:11:19.526 "name": "pt2", 00:11:19.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.526 "is_configured": true, 00:11:19.526 "data_offset": 2048, 00:11:19.526 "data_size": 63488 00:11:19.526 } 00:11:19.526 ] 00:11:19.526 }' 00:11:19.526 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.526 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.784 11:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.784 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.043 [2024-11-20 11:23:27.627087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.043 "name": "raid_bdev1", 00:11:20.043 "aliases": [ 00:11:20.043 "34d3b041-fcce-44b5-8e67-fc472abdb1dd" 00:11:20.043 ], 00:11:20.043 "product_name": "Raid Volume", 00:11:20.043 "block_size": 512, 00:11:20.043 "num_blocks": 126976, 00:11:20.043 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:20.043 "assigned_rate_limits": { 00:11:20.043 "rw_ios_per_sec": 0, 00:11:20.043 "rw_mbytes_per_sec": 0, 00:11:20.043 "r_mbytes_per_sec": 0, 00:11:20.043 "w_mbytes_per_sec": 0 00:11:20.043 }, 00:11:20.043 "claimed": false, 00:11:20.043 "zoned": false, 00:11:20.043 "supported_io_types": { 00:11:20.043 "read": true, 00:11:20.043 "write": true, 00:11:20.043 "unmap": true, 00:11:20.043 "flush": true, 00:11:20.043 "reset": true, 00:11:20.043 "nvme_admin": false, 00:11:20.043 "nvme_io": false, 00:11:20.043 "nvme_io_md": false, 00:11:20.043 "write_zeroes": true, 00:11:20.043 "zcopy": false, 00:11:20.043 "get_zone_info": false, 00:11:20.043 "zone_management": false, 00:11:20.043 "zone_append": false, 00:11:20.043 "compare": false, 00:11:20.043 "compare_and_write": false, 00:11:20.043 "abort": false, 00:11:20.043 "seek_hole": false, 00:11:20.043 
"seek_data": false, 00:11:20.043 "copy": false, 00:11:20.043 "nvme_iov_md": false 00:11:20.043 }, 00:11:20.043 "memory_domains": [ 00:11:20.043 { 00:11:20.043 "dma_device_id": "system", 00:11:20.043 "dma_device_type": 1 00:11:20.043 }, 00:11:20.043 { 00:11:20.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.043 "dma_device_type": 2 00:11:20.043 }, 00:11:20.043 { 00:11:20.043 "dma_device_id": "system", 00:11:20.043 "dma_device_type": 1 00:11:20.043 }, 00:11:20.043 { 00:11:20.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.043 "dma_device_type": 2 00:11:20.043 } 00:11:20.043 ], 00:11:20.043 "driver_specific": { 00:11:20.043 "raid": { 00:11:20.043 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:20.043 "strip_size_kb": 64, 00:11:20.043 "state": "online", 00:11:20.043 "raid_level": "concat", 00:11:20.043 "superblock": true, 00:11:20.043 "num_base_bdevs": 2, 00:11:20.043 "num_base_bdevs_discovered": 2, 00:11:20.043 "num_base_bdevs_operational": 2, 00:11:20.043 "base_bdevs_list": [ 00:11:20.043 { 00:11:20.043 "name": "pt1", 00:11:20.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.043 "is_configured": true, 00:11:20.043 "data_offset": 2048, 00:11:20.043 "data_size": 63488 00:11:20.043 }, 00:11:20.043 { 00:11:20.043 "name": "pt2", 00:11:20.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.043 "is_configured": true, 00:11:20.043 "data_offset": 2048, 00:11:20.043 "data_size": 63488 00:11:20.043 } 00:11:20.043 ] 00:11:20.043 } 00:11:20.043 } 00:11:20.043 }' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.043 pt2' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.043 11:23:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.043 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 [2024-11-20 11:23:27.890982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=34d3b041-fcce-44b5-8e67-fc472abdb1dd 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 34d3b041-fcce-44b5-8e67-fc472abdb1dd ']' 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 [2024-11-20 11:23:27.946640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.303 [2024-11-20 11:23:27.946819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.303 [2024-11-20 11:23:27.946939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.303 [2024-11-20 11:23:27.947020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.303 [2024-11-20 11:23:27.947057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:20.303 11:23:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 [2024-11-20 11:23:28.098759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:20.303 [2024-11-20 11:23:28.101458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:20.303 [2024-11-20 11:23:28.101572] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:20.303 [2024-11-20 11:23:28.101679] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:20.303 [2024-11-20 11:23:28.101707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.303 [2024-11-20 11:23:28.101722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:20.303 request: 00:11:20.303 { 00:11:20.303 "name": "raid_bdev1", 00:11:20.303 "raid_level": "concat", 00:11:20.303 "base_bdevs": [ 00:11:20.303 "malloc1", 00:11:20.303 "malloc2" 00:11:20.303 ], 00:11:20.303 "strip_size_kb": 64, 00:11:20.303 "superblock": false, 00:11:20.303 "method": "bdev_raid_create", 00:11:20.303 "req_id": 1 00:11:20.303 } 00:11:20.303 Got JSON-RPC error response 00:11:20.303 response: 00:11:20.303 { 00:11:20.303 "code": -17, 00:11:20.303 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:20.303 } 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 11:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:20.303 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.562 [2024-11-20 11:23:28.170762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.562 [2024-11-20 11:23:28.170945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.562 [2024-11-20 11:23:28.170983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:20.562 [2024-11-20 11:23:28.171002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.562 [2024-11-20 11:23:28.174024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.562 [2024-11-20 11:23:28.174069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.562 [2024-11-20 11:23:28.174158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:20.562 [2024-11-20 11:23:28.174232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.562 pt1 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:20.562 11:23:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.562 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.563 "name": "raid_bdev1", 00:11:20.563 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:20.563 "strip_size_kb": 64, 00:11:20.563 "state": "configuring", 00:11:20.563 "raid_level": "concat", 00:11:20.563 "superblock": true, 00:11:20.563 "num_base_bdevs": 2, 00:11:20.563 "num_base_bdevs_discovered": 1, 00:11:20.563 "num_base_bdevs_operational": 2, 00:11:20.563 "base_bdevs_list": [ 
00:11:20.563 { 00:11:20.563 "name": "pt1", 00:11:20.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.563 "is_configured": true, 00:11:20.563 "data_offset": 2048, 00:11:20.563 "data_size": 63488 00:11:20.563 }, 00:11:20.563 { 00:11:20.563 "name": null, 00:11:20.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.563 "is_configured": false, 00:11:20.563 "data_offset": 2048, 00:11:20.563 "data_size": 63488 00:11:20.563 } 00:11:20.563 ] 00:11:20.563 }' 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.563 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.130 [2024-11-20 11:23:28.698974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.130 [2024-11-20 11:23:28.699116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.130 [2024-11-20 11:23:28.699149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:21.130 [2024-11-20 11:23:28.699168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.130 [2024-11-20 11:23:28.699776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.130 [2024-11-20 11:23:28.699825] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.130 [2024-11-20 11:23:28.699928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.130 [2024-11-20 11:23:28.699977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.130 [2024-11-20 11:23:28.700120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:21.130 [2024-11-20 11:23:28.700141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:21.130 [2024-11-20 11:23:28.700434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:21.130 [2024-11-20 11:23:28.700645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:21.130 [2024-11-20 11:23:28.700664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:21.130 [2024-11-20 11:23:28.700835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.130 pt2 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.130 "name": "raid_bdev1", 00:11:21.130 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:21.130 "strip_size_kb": 64, 00:11:21.130 "state": "online", 00:11:21.130 "raid_level": "concat", 00:11:21.130 "superblock": true, 00:11:21.130 "num_base_bdevs": 2, 00:11:21.130 "num_base_bdevs_discovered": 2, 00:11:21.130 "num_base_bdevs_operational": 2, 00:11:21.130 "base_bdevs_list": [ 00:11:21.130 { 00:11:21.130 "name": "pt1", 00:11:21.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.130 "is_configured": true, 00:11:21.130 "data_offset": 2048, 00:11:21.130 "data_size": 63488 00:11:21.130 }, 00:11:21.130 { 00:11:21.130 "name": "pt2", 00:11:21.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.130 "is_configured": true, 00:11:21.130 "data_offset": 2048, 00:11:21.130 "data_size": 
63488 00:11:21.130 } 00:11:21.130 ] 00:11:21.130 }' 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.130 11:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.390 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.390 [2024-11-20 11:23:29.227518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.649 "name": "raid_bdev1", 00:11:21.649 "aliases": [ 00:11:21.649 "34d3b041-fcce-44b5-8e67-fc472abdb1dd" 00:11:21.649 ], 00:11:21.649 "product_name": "Raid Volume", 00:11:21.649 "block_size": 512, 00:11:21.649 "num_blocks": 126976, 00:11:21.649 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:21.649 "assigned_rate_limits": { 00:11:21.649 
"rw_ios_per_sec": 0, 00:11:21.649 "rw_mbytes_per_sec": 0, 00:11:21.649 "r_mbytes_per_sec": 0, 00:11:21.649 "w_mbytes_per_sec": 0 00:11:21.649 }, 00:11:21.649 "claimed": false, 00:11:21.649 "zoned": false, 00:11:21.649 "supported_io_types": { 00:11:21.649 "read": true, 00:11:21.649 "write": true, 00:11:21.649 "unmap": true, 00:11:21.649 "flush": true, 00:11:21.649 "reset": true, 00:11:21.649 "nvme_admin": false, 00:11:21.649 "nvme_io": false, 00:11:21.649 "nvme_io_md": false, 00:11:21.649 "write_zeroes": true, 00:11:21.649 "zcopy": false, 00:11:21.649 "get_zone_info": false, 00:11:21.649 "zone_management": false, 00:11:21.649 "zone_append": false, 00:11:21.649 "compare": false, 00:11:21.649 "compare_and_write": false, 00:11:21.649 "abort": false, 00:11:21.649 "seek_hole": false, 00:11:21.649 "seek_data": false, 00:11:21.649 "copy": false, 00:11:21.649 "nvme_iov_md": false 00:11:21.649 }, 00:11:21.649 "memory_domains": [ 00:11:21.649 { 00:11:21.649 "dma_device_id": "system", 00:11:21.649 "dma_device_type": 1 00:11:21.649 }, 00:11:21.649 { 00:11:21.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.649 "dma_device_type": 2 00:11:21.649 }, 00:11:21.649 { 00:11:21.649 "dma_device_id": "system", 00:11:21.649 "dma_device_type": 1 00:11:21.649 }, 00:11:21.649 { 00:11:21.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.649 "dma_device_type": 2 00:11:21.649 } 00:11:21.649 ], 00:11:21.649 "driver_specific": { 00:11:21.649 "raid": { 00:11:21.649 "uuid": "34d3b041-fcce-44b5-8e67-fc472abdb1dd", 00:11:21.649 "strip_size_kb": 64, 00:11:21.649 "state": "online", 00:11:21.649 "raid_level": "concat", 00:11:21.649 "superblock": true, 00:11:21.649 "num_base_bdevs": 2, 00:11:21.649 "num_base_bdevs_discovered": 2, 00:11:21.649 "num_base_bdevs_operational": 2, 00:11:21.649 "base_bdevs_list": [ 00:11:21.649 { 00:11:21.649 "name": "pt1", 00:11:21.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.649 "is_configured": true, 00:11:21.649 "data_offset": 2048, 00:11:21.649 
"data_size": 63488 00:11:21.649 }, 00:11:21.649 { 00:11:21.649 "name": "pt2", 00:11:21.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.649 "is_configured": true, 00:11:21.649 "data_offset": 2048, 00:11:21.649 "data_size": 63488 00:11:21.649 } 00:11:21.649 ] 00:11:21.649 } 00:11:21.649 } 00:11:21.649 }' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:21.649 pt2' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:21.649 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.649 [2024-11-20 11:23:29.483536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 34d3b041-fcce-44b5-8e67-fc472abdb1dd '!=' 34d3b041-fcce-44b5-8e67-fc472abdb1dd ']' 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62097 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62097 ']' 
00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62097 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62097 00:11:21.909 killing process with pid 62097 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62097' 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62097 00:11:21.909 [2024-11-20 11:23:29.566977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.909 11:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62097 00:11:21.909 [2024-11-20 11:23:29.567130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.909 [2024-11-20 11:23:29.567207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.909 [2024-11-20 11:23:29.567234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:22.168 [2024-11-20 11:23:29.763233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.541 ************************************ 00:11:23.541 END TEST raid_superblock_test 00:11:23.541 ************************************ 00:11:23.541 11:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:23.541 00:11:23.541 real 0m5.214s 00:11:23.541 user 0m7.667s 00:11:23.541 sys 
0m0.717s 00:11:23.541 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.541 11:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.541 11:23:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:23.541 11:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:23.541 11:23:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.541 11:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.541 ************************************ 00:11:23.541 START TEST raid_read_error_test 00:11:23.541 ************************************ 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.541 
11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:23.541 11:23:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:23.541 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:23.541 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:23.541 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:23.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AB2yBtawXn 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62320 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62320 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62320 ']' 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.542 11:23:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.542 [2024-11-20 11:23:31.097640] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:23.542 [2024-11-20 11:23:31.097853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:11:23.542 [2024-11-20 11:23:31.276637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.800 [2024-11-20 11:23:31.410952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.800 [2024-11-20 11:23:31.616344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.800 [2024-11-20 11:23:31.616422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 BaseBdev1_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 true 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 [2024-11-20 11:23:32.111852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.368 [2024-11-20 11:23:32.111926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.368 [2024-11-20 11:23:32.111974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.368 [2024-11-20 11:23:32.111994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.368 [2024-11-20 11:23:32.115082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.368 [2024-11-20 11:23:32.115133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.368 BaseBdev1 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 BaseBdev2_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 true 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 [2024-11-20 11:23:32.180959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.368 [2024-11-20 11:23:32.181218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.368 [2024-11-20 11:23:32.181292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:24.368 [2024-11-20 11:23:32.181412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.368 [2024-11-20 11:23:32.184541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.368 [2024-11-20 11:23:32.184736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.368 BaseBdev2 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.368 [2024-11-20 11:23:32.193117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:11:24.368 [2024-11-20 11:23:32.195781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.368 [2024-11-20 11:23:32.196188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.368 [2024-11-20 11:23:32.196217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:24.368 [2024-11-20 11:23:32.196571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:24.368 [2024-11-20 11:23:32.196853] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.368 [2024-11-20 11:23:32.196875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:24.368 [2024-11-20 11:23:32.197148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.368 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.369 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.628 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.628 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.628 "name": "raid_bdev1", 00:11:24.628 "uuid": "fedf80ce-aa61-4741-b9ab-0bb911b0c8a5", 00:11:24.628 "strip_size_kb": 64, 00:11:24.628 "state": "online", 00:11:24.628 "raid_level": "concat", 00:11:24.628 "superblock": true, 00:11:24.628 "num_base_bdevs": 2, 00:11:24.628 "num_base_bdevs_discovered": 2, 00:11:24.628 "num_base_bdevs_operational": 2, 00:11:24.628 "base_bdevs_list": [ 00:11:24.628 { 00:11:24.628 "name": "BaseBdev1", 00:11:24.628 "uuid": "d8eb5397-654c-5009-8d93-cb6f5e27a504", 00:11:24.628 "is_configured": true, 00:11:24.628 "data_offset": 2048, 00:11:24.628 "data_size": 63488 00:11:24.628 }, 00:11:24.628 { 00:11:24.628 "name": "BaseBdev2", 00:11:24.628 "uuid": "6729a51f-6aa9-5ed2-9aeb-75d32a2457a1", 00:11:24.628 "is_configured": true, 00:11:24.628 "data_offset": 2048, 00:11:24.628 "data_size": 63488 00:11:24.628 } 00:11:24.628 ] 00:11:24.628 }' 00:11:24.628 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.628 11:23:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.887 11:23:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.887 11:23:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:25.145 [2024-11-20 11:23:32.806671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.081 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.082 "name": "raid_bdev1", 00:11:26.082 "uuid": "fedf80ce-aa61-4741-b9ab-0bb911b0c8a5", 00:11:26.082 "strip_size_kb": 64, 00:11:26.082 "state": "online", 00:11:26.082 "raid_level": "concat", 00:11:26.082 "superblock": true, 00:11:26.082 "num_base_bdevs": 2, 00:11:26.082 "num_base_bdevs_discovered": 2, 00:11:26.082 "num_base_bdevs_operational": 2, 00:11:26.082 "base_bdevs_list": [ 00:11:26.082 { 00:11:26.082 "name": "BaseBdev1", 00:11:26.082 "uuid": "d8eb5397-654c-5009-8d93-cb6f5e27a504", 00:11:26.082 "is_configured": true, 00:11:26.082 "data_offset": 2048, 00:11:26.082 "data_size": 63488 00:11:26.082 }, 00:11:26.082 { 00:11:26.082 "name": "BaseBdev2", 00:11:26.082 "uuid": "6729a51f-6aa9-5ed2-9aeb-75d32a2457a1", 00:11:26.082 "is_configured": true, 00:11:26.082 "data_offset": 2048, 00:11:26.082 "data_size": 63488 00:11:26.082 } 00:11:26.082 ] 00:11:26.082 }' 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.082 11:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.650 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.650 11:23:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.650 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.650 [2024-11-20 11:23:34.225684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.650 [2024-11-20 11:23:34.225728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.650 [2024-11-20 11:23:34.229116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.650 [2024-11-20 11:23:34.229168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.650 [2024-11-20 11:23:34.229222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.650 [2024-11-20 11:23:34.229257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:26.650 { 00:11:26.650 "results": [ 00:11:26.650 { 00:11:26.650 "job": "raid_bdev1", 00:11:26.650 "core_mask": "0x1", 00:11:26.650 "workload": "randrw", 00:11:26.650 "percentage": 50, 00:11:26.650 "status": "finished", 00:11:26.650 "queue_depth": 1, 00:11:26.650 "io_size": 131072, 00:11:26.650 "runtime": 1.416453, 00:11:26.650 "iops": 10550.297115400228, 00:11:26.650 "mibps": 1318.7871394250285, 00:11:26.650 "io_failed": 1, 00:11:26.650 "io_timeout": 0, 00:11:26.650 "avg_latency_us": 132.78546135831382, 00:11:26.650 "min_latency_us": 41.89090909090909, 00:11:26.650 "max_latency_us": 1869.2654545454545 00:11:26.650 } 00:11:26.650 ], 00:11:26.650 "core_count": 1 00:11:26.650 } 00:11:26.650 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.650 11:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62320 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62320 ']' 00:11:26.651 11:23:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62320 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62320 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62320' 00:11:26.651 killing process with pid 62320 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62320 00:11:26.651 [2024-11-20 11:23:34.270690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.651 11:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62320 00:11:26.651 [2024-11-20 11:23:34.396888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AB2yBtawXn 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:28.030 ************************************ 00:11:28.030 END TEST raid_read_error_test 00:11:28.030 ************************************ 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:28.030 00:11:28.030 real 0m4.524s 00:11:28.030 user 0m5.605s 00:11:28.030 sys 0m0.578s 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.030 11:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.030 11:23:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:28.030 11:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:28.030 11:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.030 11:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.030 ************************************ 00:11:28.030 START TEST raid_write_error_test 00:11:28.030 ************************************ 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.030 11:23:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RzOkt3Ko4F 00:11:28.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62460 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62460 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62460 ']' 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.030 11:23:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.030 [2024-11-20 11:23:35.702555] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:28.030 [2024-11-20 11:23:35.702762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62460 ] 00:11:28.289 [2024-11-20 11:23:35.887937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.289 [2024-11-20 11:23:36.040866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.547 [2024-11-20 11:23:36.250053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.547 [2024-11-20 11:23:36.250108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.114 BaseBdev1_malloc 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.114 true 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.114 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.114 [2024-11-20 11:23:36.725586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:29.114 [2024-11-20 11:23:36.725826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.115 [2024-11-20 11:23:36.725871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:29.115 [2024-11-20 11:23:36.725893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.115 [2024-11-20 11:23:36.728794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.115 [2024-11-20 11:23:36.729006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.115 BaseBdev1 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 BaseBdev2_malloc 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:29.115 11:23:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 true 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 [2024-11-20 11:23:36.790413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:29.115 [2024-11-20 11:23:36.790480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.115 [2024-11-20 11:23:36.790516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:29.115 [2024-11-20 11:23:36.790532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.115 [2024-11-20 11:23:36.793390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.115 [2024-11-20 11:23:36.793435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.115 BaseBdev2 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 [2024-11-20 11:23:36.798487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:29.115 [2024-11-20 11:23:36.801104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.115 [2024-11-20 11:23:36.801378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.115 [2024-11-20 11:23:36.801402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:29.115 [2024-11-20 11:23:36.801787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:29.115 [2024-11-20 11:23:36.802020] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.115 [2024-11-20 11:23:36.802040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:29.115 [2024-11-20 11:23:36.802247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.115 11:23:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.115 "name": "raid_bdev1", 00:11:29.115 "uuid": "c0158526-6a88-4a6c-8cec-83fc263d822f", 00:11:29.115 "strip_size_kb": 64, 00:11:29.115 "state": "online", 00:11:29.115 "raid_level": "concat", 00:11:29.115 "superblock": true, 00:11:29.115 "num_base_bdevs": 2, 00:11:29.115 "num_base_bdevs_discovered": 2, 00:11:29.115 "num_base_bdevs_operational": 2, 00:11:29.115 "base_bdevs_list": [ 00:11:29.115 { 00:11:29.115 "name": "BaseBdev1", 00:11:29.115 "uuid": "4c054717-482b-5138-a25d-c781d9cc7593", 00:11:29.115 "is_configured": true, 00:11:29.115 "data_offset": 2048, 00:11:29.115 "data_size": 63488 00:11:29.115 }, 00:11:29.115 { 00:11:29.115 "name": "BaseBdev2", 00:11:29.115 "uuid": "e48eea90-33ba-5903-9041-9a6e5489215d", 00:11:29.115 "is_configured": true, 00:11:29.115 "data_offset": 2048, 00:11:29.115 "data_size": 63488 00:11:29.115 } 00:11:29.115 ] 00:11:29.115 }' 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.115 11:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.681 11:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:11:29.681 11:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:29.681 [2024-11-20 11:23:37.488281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.616 "name": "raid_bdev1", 00:11:30.616 "uuid": "c0158526-6a88-4a6c-8cec-83fc263d822f", 00:11:30.616 "strip_size_kb": 64, 00:11:30.616 "state": "online", 00:11:30.616 "raid_level": "concat", 00:11:30.616 "superblock": true, 00:11:30.616 "num_base_bdevs": 2, 00:11:30.616 "num_base_bdevs_discovered": 2, 00:11:30.616 "num_base_bdevs_operational": 2, 00:11:30.616 "base_bdevs_list": [ 00:11:30.616 { 00:11:30.616 "name": "BaseBdev1", 00:11:30.616 "uuid": "4c054717-482b-5138-a25d-c781d9cc7593", 00:11:30.616 "is_configured": true, 00:11:30.616 "data_offset": 2048, 00:11:30.616 "data_size": 63488 00:11:30.616 }, 00:11:30.616 { 00:11:30.616 "name": "BaseBdev2", 00:11:30.616 "uuid": "e48eea90-33ba-5903-9041-9a6e5489215d", 00:11:30.616 "is_configured": true, 00:11:30.616 "data_offset": 2048, 00:11:30.616 "data_size": 63488 00:11:30.616 } 00:11:30.616 ] 00:11:30.616 }' 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.616 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.185 [2024-11-20 11:23:38.898246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.185 [2024-11-20 11:23:38.898288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.185 [2024-11-20 11:23:38.901577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.185 [2024-11-20 11:23:38.901645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.185 [2024-11-20 11:23:38.901697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.185 [2024-11-20 11:23:38.901720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:31.185 { 00:11:31.185 "results": [ 00:11:31.185 { 00:11:31.185 "job": "raid_bdev1", 00:11:31.185 "core_mask": "0x1", 00:11:31.185 "workload": "randrw", 00:11:31.185 "percentage": 50, 00:11:31.185 "status": "finished", 00:11:31.185 "queue_depth": 1, 00:11:31.185 "io_size": 131072, 00:11:31.185 "runtime": 1.407089, 00:11:31.185 "iops": 10197.649189212623, 00:11:31.185 "mibps": 1274.706148651578, 00:11:31.185 "io_failed": 1, 00:11:31.185 "io_timeout": 0, 00:11:31.185 "avg_latency_us": 136.65400823566677, 00:11:31.185 "min_latency_us": 40.02909090909091, 00:11:31.185 "max_latency_us": 2040.5527272727272 00:11:31.185 } 00:11:31.185 ], 00:11:31.185 "core_count": 1 00:11:31.185 } 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62460 00:11:31.185 11:23:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62460 ']' 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62460 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62460 00:11:31.185 killing process with pid 62460 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62460' 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62460 00:11:31.185 [2024-11-20 11:23:38.945147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.185 11:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62460 00:11:31.453 [2024-11-20 11:23:39.074232] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RzOkt3Ko4F 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.389 11:23:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.389 11:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:32.389 00:11:32.389 real 0m4.656s 00:11:32.389 user 0m5.851s 00:11:32.389 sys 0m0.568s 00:11:32.648 ************************************ 00:11:32.648 END TEST raid_write_error_test 00:11:32.648 ************************************ 00:11:32.649 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.649 11:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 11:23:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:32.649 11:23:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:32.649 11:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.649 11:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.649 11:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 ************************************ 00:11:32.649 START TEST raid_state_function_test 00:11:32.649 ************************************ 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62608 00:11:32.649 Process raid pid: 62608 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 62608' 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62608 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62608 ']' 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.649 11:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 [2024-11-20 11:23:40.420903] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:32.649 [2024-11-20 11:23:40.421180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.907 [2024-11-20 11:23:40.609337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.907 [2024-11-20 11:23:40.747731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.165 [2024-11-20 11:23:40.966361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.165 [2024-11-20 11:23:40.966418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.731 [2024-11-20 11:23:41.417667] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.731 [2024-11-20 11:23:41.417733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.731 [2024-11-20 11:23:41.417771] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.731 [2024-11-20 11:23:41.417789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.731 11:23:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.731 "name": "Existed_Raid", 00:11:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.731 "strip_size_kb": 0, 00:11:33.731 "state": "configuring", 00:11:33.731 
"raid_level": "raid1", 00:11:33.731 "superblock": false, 00:11:33.731 "num_base_bdevs": 2, 00:11:33.731 "num_base_bdevs_discovered": 0, 00:11:33.731 "num_base_bdevs_operational": 2, 00:11:33.731 "base_bdevs_list": [ 00:11:33.731 { 00:11:33.731 "name": "BaseBdev1", 00:11:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.731 "is_configured": false, 00:11:33.731 "data_offset": 0, 00:11:33.731 "data_size": 0 00:11:33.731 }, 00:11:33.731 { 00:11:33.731 "name": "BaseBdev2", 00:11:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.731 "is_configured": false, 00:11:33.731 "data_offset": 0, 00:11:33.731 "data_size": 0 00:11:33.731 } 00:11:33.731 ] 00:11:33.731 }' 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.731 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 [2024-11-20 11:23:41.925796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.299 [2024-11-20 11:23:41.925839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:34.299 [2024-11-20 11:23:41.937760] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.299 [2024-11-20 11:23:41.937952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.299 [2024-11-20 11:23:41.937996] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.299 [2024-11-20 11:23:41.938034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 [2024-11-20 11:23:41.984048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.299 BaseBdev1 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 [ 00:11:34.299 { 00:11:34.299 "name": "BaseBdev1", 00:11:34.299 "aliases": [ 00:11:34.299 "dcb44473-3c84-41f0-b4e7-93bfaae75bee" 00:11:34.299 ], 00:11:34.299 "product_name": "Malloc disk", 00:11:34.299 "block_size": 512, 00:11:34.299 "num_blocks": 65536, 00:11:34.299 "uuid": "dcb44473-3c84-41f0-b4e7-93bfaae75bee", 00:11:34.299 "assigned_rate_limits": { 00:11:34.299 "rw_ios_per_sec": 0, 00:11:34.299 "rw_mbytes_per_sec": 0, 00:11:34.299 "r_mbytes_per_sec": 0, 00:11:34.299 "w_mbytes_per_sec": 0 00:11:34.299 }, 00:11:34.299 "claimed": true, 00:11:34.299 "claim_type": "exclusive_write", 00:11:34.299 "zoned": false, 00:11:34.299 "supported_io_types": { 00:11:34.299 "read": true, 00:11:34.299 "write": true, 00:11:34.299 "unmap": true, 00:11:34.299 "flush": true, 00:11:34.299 "reset": true, 00:11:34.299 "nvme_admin": false, 00:11:34.299 "nvme_io": false, 00:11:34.299 "nvme_io_md": false, 00:11:34.299 "write_zeroes": true, 00:11:34.299 "zcopy": true, 00:11:34.299 "get_zone_info": false, 00:11:34.299 "zone_management": false, 00:11:34.299 "zone_append": false, 00:11:34.299 "compare": false, 00:11:34.299 "compare_and_write": false, 00:11:34.299 "abort": true, 00:11:34.299 "seek_hole": false, 00:11:34.299 "seek_data": false, 00:11:34.299 "copy": true, 00:11:34.299 "nvme_iov_md": 
false 00:11:34.299 }, 00:11:34.299 "memory_domains": [ 00:11:34.299 { 00:11:34.299 "dma_device_id": "system", 00:11:34.299 "dma_device_type": 1 00:11:34.299 }, 00:11:34.299 { 00:11:34.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.299 "dma_device_type": 2 00:11:34.299 } 00:11:34.299 ], 00:11:34.299 "driver_specific": {} 00:11:34.299 } 00:11:34.299 ] 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.299 
11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.299 "name": "Existed_Raid", 00:11:34.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.299 "strip_size_kb": 0, 00:11:34.299 "state": "configuring", 00:11:34.299 "raid_level": "raid1", 00:11:34.299 "superblock": false, 00:11:34.299 "num_base_bdevs": 2, 00:11:34.299 "num_base_bdevs_discovered": 1, 00:11:34.299 "num_base_bdevs_operational": 2, 00:11:34.299 "base_bdevs_list": [ 00:11:34.299 { 00:11:34.299 "name": "BaseBdev1", 00:11:34.299 "uuid": "dcb44473-3c84-41f0-b4e7-93bfaae75bee", 00:11:34.299 "is_configured": true, 00:11:34.299 "data_offset": 0, 00:11:34.299 "data_size": 65536 00:11:34.299 }, 00:11:34.299 { 00:11:34.299 "name": "BaseBdev2", 00:11:34.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.299 "is_configured": false, 00:11:34.299 "data_offset": 0, 00:11:34.299 "data_size": 0 00:11:34.299 } 00:11:34.299 ] 00:11:34.299 }' 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.299 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.870 [2024-11-20 11:23:42.540303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.870 [2024-11-20 11:23:42.540362] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.870 [2024-11-20 11:23:42.548347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.870 [2024-11-20 11:23:42.551208] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.870 [2024-11-20 11:23:42.551414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.870 "name": "Existed_Raid", 00:11:34.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.870 "strip_size_kb": 0, 00:11:34.870 "state": "configuring", 00:11:34.870 "raid_level": "raid1", 00:11:34.870 "superblock": false, 00:11:34.870 "num_base_bdevs": 2, 00:11:34.870 "num_base_bdevs_discovered": 1, 00:11:34.870 "num_base_bdevs_operational": 2, 00:11:34.870 "base_bdevs_list": [ 00:11:34.870 { 00:11:34.870 "name": "BaseBdev1", 00:11:34.870 "uuid": "dcb44473-3c84-41f0-b4e7-93bfaae75bee", 00:11:34.870 "is_configured": true, 00:11:34.870 "data_offset": 0, 00:11:34.870 "data_size": 65536 00:11:34.870 }, 00:11:34.870 { 00:11:34.870 "name": "BaseBdev2", 00:11:34.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.870 "is_configured": false, 00:11:34.870 "data_offset": 0, 00:11:34.870 "data_size": 0 00:11:34.870 } 00:11:34.870 ] 
00:11:34.870 }' 00:11:34.870 11:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.871 11:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.438 [2024-11-20 11:23:43.095434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.438 [2024-11-20 11:23:43.095509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.438 [2024-11-20 11:23:43.095524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.438 [2024-11-20 11:23:43.095963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:35.438 [2024-11-20 11:23:43.096176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.438 [2024-11-20 11:23:43.096204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.438 [2024-11-20 11:23:43.096521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.438 BaseBdev2 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.438 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.438 [ 00:11:35.438 { 00:11:35.438 "name": "BaseBdev2", 00:11:35.438 "aliases": [ 00:11:35.438 "84754e46-5f2e-4a02-a7e8-356cc0b0601a" 00:11:35.438 ], 00:11:35.438 "product_name": "Malloc disk", 00:11:35.438 "block_size": 512, 00:11:35.438 "num_blocks": 65536, 00:11:35.438 "uuid": "84754e46-5f2e-4a02-a7e8-356cc0b0601a", 00:11:35.438 "assigned_rate_limits": { 00:11:35.438 "rw_ios_per_sec": 0, 00:11:35.438 "rw_mbytes_per_sec": 0, 00:11:35.438 "r_mbytes_per_sec": 0, 00:11:35.438 "w_mbytes_per_sec": 0 00:11:35.438 }, 00:11:35.438 "claimed": true, 00:11:35.438 "claim_type": "exclusive_write", 00:11:35.438 "zoned": false, 00:11:35.438 "supported_io_types": { 00:11:35.438 "read": true, 00:11:35.438 "write": true, 00:11:35.438 "unmap": true, 00:11:35.438 "flush": true, 00:11:35.438 "reset": true, 00:11:35.438 "nvme_admin": false, 00:11:35.438 "nvme_io": false, 00:11:35.438 "nvme_io_md": false, 00:11:35.438 "write_zeroes": 
true, 00:11:35.438 "zcopy": true, 00:11:35.438 "get_zone_info": false, 00:11:35.438 "zone_management": false, 00:11:35.438 "zone_append": false, 00:11:35.438 "compare": false, 00:11:35.438 "compare_and_write": false, 00:11:35.438 "abort": true, 00:11:35.438 "seek_hole": false, 00:11:35.438 "seek_data": false, 00:11:35.438 "copy": true, 00:11:35.438 "nvme_iov_md": false 00:11:35.438 }, 00:11:35.438 "memory_domains": [ 00:11:35.439 { 00:11:35.439 "dma_device_id": "system", 00:11:35.439 "dma_device_type": 1 00:11:35.439 }, 00:11:35.439 { 00:11:35.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.439 "dma_device_type": 2 00:11:35.439 } 00:11:35.439 ], 00:11:35.439 "driver_specific": {} 00:11:35.439 } 00:11:35.439 ] 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.439 11:23:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.439 "name": "Existed_Raid", 00:11:35.439 "uuid": "4bab9f36-12f1-4754-b3cc-d97d8659c771", 00:11:35.439 "strip_size_kb": 0, 00:11:35.439 "state": "online", 00:11:35.439 "raid_level": "raid1", 00:11:35.439 "superblock": false, 00:11:35.439 "num_base_bdevs": 2, 00:11:35.439 "num_base_bdevs_discovered": 2, 00:11:35.439 "num_base_bdevs_operational": 2, 00:11:35.439 "base_bdevs_list": [ 00:11:35.439 { 00:11:35.439 "name": "BaseBdev1", 00:11:35.439 "uuid": "dcb44473-3c84-41f0-b4e7-93bfaae75bee", 00:11:35.439 "is_configured": true, 00:11:35.439 "data_offset": 0, 00:11:35.439 "data_size": 65536 00:11:35.439 }, 00:11:35.439 { 00:11:35.439 "name": "BaseBdev2", 00:11:35.439 "uuid": "84754e46-5f2e-4a02-a7e8-356cc0b0601a", 00:11:35.439 "is_configured": true, 00:11:35.439 "data_offset": 0, 00:11:35.439 "data_size": 65536 00:11:35.439 } 00:11:35.439 ] 00:11:35.439 }' 00:11:35.439 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.439 11:23:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.006 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.006 [2024-11-20 11:23:43.672055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.007 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.007 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.007 "name": "Existed_Raid", 00:11:36.007 "aliases": [ 00:11:36.007 "4bab9f36-12f1-4754-b3cc-d97d8659c771" 00:11:36.007 ], 00:11:36.007 "product_name": "Raid Volume", 00:11:36.007 "block_size": 512, 00:11:36.007 "num_blocks": 65536, 00:11:36.007 "uuid": "4bab9f36-12f1-4754-b3cc-d97d8659c771", 00:11:36.007 "assigned_rate_limits": { 00:11:36.007 "rw_ios_per_sec": 0, 00:11:36.007 "rw_mbytes_per_sec": 0, 00:11:36.007 "r_mbytes_per_sec": 0, 00:11:36.007 
"w_mbytes_per_sec": 0 00:11:36.007 }, 00:11:36.007 "claimed": false, 00:11:36.007 "zoned": false, 00:11:36.007 "supported_io_types": { 00:11:36.007 "read": true, 00:11:36.007 "write": true, 00:11:36.007 "unmap": false, 00:11:36.007 "flush": false, 00:11:36.007 "reset": true, 00:11:36.007 "nvme_admin": false, 00:11:36.007 "nvme_io": false, 00:11:36.007 "nvme_io_md": false, 00:11:36.007 "write_zeroes": true, 00:11:36.007 "zcopy": false, 00:11:36.007 "get_zone_info": false, 00:11:36.007 "zone_management": false, 00:11:36.007 "zone_append": false, 00:11:36.007 "compare": false, 00:11:36.007 "compare_and_write": false, 00:11:36.007 "abort": false, 00:11:36.007 "seek_hole": false, 00:11:36.007 "seek_data": false, 00:11:36.007 "copy": false, 00:11:36.007 "nvme_iov_md": false 00:11:36.007 }, 00:11:36.007 "memory_domains": [ 00:11:36.007 { 00:11:36.007 "dma_device_id": "system", 00:11:36.007 "dma_device_type": 1 00:11:36.007 }, 00:11:36.007 { 00:11:36.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.007 "dma_device_type": 2 00:11:36.007 }, 00:11:36.007 { 00:11:36.007 "dma_device_id": "system", 00:11:36.007 "dma_device_type": 1 00:11:36.007 }, 00:11:36.007 { 00:11:36.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.007 "dma_device_type": 2 00:11:36.007 } 00:11:36.007 ], 00:11:36.007 "driver_specific": { 00:11:36.007 "raid": { 00:11:36.007 "uuid": "4bab9f36-12f1-4754-b3cc-d97d8659c771", 00:11:36.007 "strip_size_kb": 0, 00:11:36.007 "state": "online", 00:11:36.007 "raid_level": "raid1", 00:11:36.007 "superblock": false, 00:11:36.007 "num_base_bdevs": 2, 00:11:36.007 "num_base_bdevs_discovered": 2, 00:11:36.007 "num_base_bdevs_operational": 2, 00:11:36.007 "base_bdevs_list": [ 00:11:36.007 { 00:11:36.007 "name": "BaseBdev1", 00:11:36.007 "uuid": "dcb44473-3c84-41f0-b4e7-93bfaae75bee", 00:11:36.007 "is_configured": true, 00:11:36.007 "data_offset": 0, 00:11:36.007 "data_size": 65536 00:11:36.007 }, 00:11:36.007 { 00:11:36.007 "name": "BaseBdev2", 00:11:36.007 "uuid": 
"84754e46-5f2e-4a02-a7e8-356cc0b0601a", 00:11:36.007 "is_configured": true, 00:11:36.007 "data_offset": 0, 00:11:36.007 "data_size": 65536 00:11:36.007 } 00:11:36.007 ] 00:11:36.007 } 00:11:36.007 } 00:11:36.007 }' 00:11:36.007 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.007 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.007 BaseBdev2' 00:11:36.007 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.329 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.330 11:23:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.330 11:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.330 [2024-11-20 11:23:43.999823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.330 "name": "Existed_Raid", 00:11:36.330 "uuid": "4bab9f36-12f1-4754-b3cc-d97d8659c771", 00:11:36.330 "strip_size_kb": 0, 00:11:36.330 "state": "online", 00:11:36.330 "raid_level": "raid1", 00:11:36.330 "superblock": false, 00:11:36.330 "num_base_bdevs": 2, 00:11:36.330 "num_base_bdevs_discovered": 1, 00:11:36.330 "num_base_bdevs_operational": 1, 00:11:36.330 "base_bdevs_list": [ 00:11:36.330 { 
00:11:36.330 "name": null, 00:11:36.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.330 "is_configured": false, 00:11:36.330 "data_offset": 0, 00:11:36.330 "data_size": 65536 00:11:36.330 }, 00:11:36.330 { 00:11:36.330 "name": "BaseBdev2", 00:11:36.330 "uuid": "84754e46-5f2e-4a02-a7e8-356cc0b0601a", 00:11:36.330 "is_configured": true, 00:11:36.330 "data_offset": 0, 00:11:36.330 "data_size": 65536 00:11:36.330 } 00:11:36.330 ] 00:11:36.330 }' 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.330 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.897 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.897 [2024-11-20 11:23:44.654087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.897 [2024-11-20 11:23:44.654241] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.154 [2024-11-20 11:23:44.766921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.154 [2024-11-20 11:23:44.767014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.155 [2024-11-20 11:23:44.767039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62608 00:11:37.155 11:23:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62608 ']' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62608 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62608 00:11:37.155 killing process with pid 62608 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62608' 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62608 00:11:37.155 11:23:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62608 00:11:37.155 [2024-11-20 11:23:44.862751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.155 [2024-11-20 11:23:44.881837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.529 00:11:38.529 real 0m5.881s 00:11:38.529 user 0m8.763s 00:11:38.529 sys 0m0.802s 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.529 ************************************ 00:11:38.529 END TEST raid_state_function_test 00:11:38.529 ************************************ 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.529 11:23:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:38.529 11:23:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.529 11:23:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.529 11:23:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.529 ************************************ 00:11:38.529 START TEST raid_state_function_test_sb 00:11:38.529 ************************************ 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:38.529 Process raid pid: 62862 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62862 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62862' 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62862 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62862 ']' 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.529 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.529 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.529 [2024-11-20 11:23:46.329461] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:11:38.530 [2024-11-20 11:23:46.330290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.787 [2024-11-20 11:23:46.521914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.047 [2024-11-20 11:23:46.687521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.305 [2024-11-20 11:23:46.895856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.305 [2024-11-20 11:23:46.895914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.870 [2024-11-20 11:23:47.512131] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.870 [2024-11-20 11:23:47.512198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.870 [2024-11-20 11:23:47.512216] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.870 [2024-11-20 11:23:47.512232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.870 "name": "Existed_Raid", 00:11:39.870 "uuid": "a8e7f572-3e12-4ebb-90a7-5ffde09d08da", 00:11:39.870 "strip_size_kb": 0, 00:11:39.870 "state": "configuring", 00:11:39.870 "raid_level": "raid1", 00:11:39.870 "superblock": true, 00:11:39.870 "num_base_bdevs": 2, 00:11:39.870 "num_base_bdevs_discovered": 0, 00:11:39.870 "num_base_bdevs_operational": 2, 00:11:39.870 "base_bdevs_list": [ 00:11:39.870 { 00:11:39.870 "name": "BaseBdev1", 00:11:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.870 "is_configured": false, 00:11:39.870 "data_offset": 0, 00:11:39.870 "data_size": 0 00:11:39.870 }, 00:11:39.870 { 00:11:39.870 "name": "BaseBdev2", 00:11:39.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.870 "is_configured": false, 00:11:39.870 "data_offset": 0, 00:11:39.870 "data_size": 0 00:11:39.870 } 00:11:39.870 ] 00:11:39.870 }' 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.870 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 [2024-11-20 11:23:47.984246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:40.438 [2024-11-20 11:23:47.984288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 [2024-11-20 11:23:47.992206] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.438 [2024-11-20 11:23:47.992270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.438 [2024-11-20 11:23:47.992286] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.438 [2024-11-20 11:23:47.992315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 [2024-11-20 11:23:48.036917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.438 BaseBdev1 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 [ 00:11:40.438 { 00:11:40.438 "name": "BaseBdev1", 00:11:40.438 "aliases": [ 00:11:40.438 "414d489e-2f80-4b25-9a6a-0f336e569bd8" 00:11:40.438 ], 00:11:40.438 "product_name": "Malloc disk", 00:11:40.438 "block_size": 512, 00:11:40.438 "num_blocks": 65536, 00:11:40.438 "uuid": "414d489e-2f80-4b25-9a6a-0f336e569bd8", 00:11:40.438 "assigned_rate_limits": { 00:11:40.438 "rw_ios_per_sec": 0, 00:11:40.438 "rw_mbytes_per_sec": 0, 00:11:40.438 "r_mbytes_per_sec": 0, 00:11:40.438 "w_mbytes_per_sec": 0 00:11:40.438 }, 00:11:40.438 "claimed": true, 
00:11:40.438 "claim_type": "exclusive_write", 00:11:40.438 "zoned": false, 00:11:40.438 "supported_io_types": { 00:11:40.438 "read": true, 00:11:40.438 "write": true, 00:11:40.438 "unmap": true, 00:11:40.438 "flush": true, 00:11:40.438 "reset": true, 00:11:40.438 "nvme_admin": false, 00:11:40.438 "nvme_io": false, 00:11:40.438 "nvme_io_md": false, 00:11:40.438 "write_zeroes": true, 00:11:40.438 "zcopy": true, 00:11:40.438 "get_zone_info": false, 00:11:40.438 "zone_management": false, 00:11:40.438 "zone_append": false, 00:11:40.438 "compare": false, 00:11:40.438 "compare_and_write": false, 00:11:40.438 "abort": true, 00:11:40.438 "seek_hole": false, 00:11:40.438 "seek_data": false, 00:11:40.438 "copy": true, 00:11:40.438 "nvme_iov_md": false 00:11:40.438 }, 00:11:40.438 "memory_domains": [ 00:11:40.438 { 00:11:40.438 "dma_device_id": "system", 00:11:40.438 "dma_device_type": 1 00:11:40.438 }, 00:11:40.438 { 00:11:40.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.438 "dma_device_type": 2 00:11:40.438 } 00:11:40.438 ], 00:11:40.438 "driver_specific": {} 00:11:40.438 } 00:11:40.438 ] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.438 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.438 "name": "Existed_Raid", 00:11:40.438 "uuid": "b03bf998-17ed-4b6d-9619-5d383056317d", 00:11:40.438 "strip_size_kb": 0, 00:11:40.438 "state": "configuring", 00:11:40.438 "raid_level": "raid1", 00:11:40.438 "superblock": true, 00:11:40.438 "num_base_bdevs": 2, 00:11:40.438 "num_base_bdevs_discovered": 1, 00:11:40.438 "num_base_bdevs_operational": 2, 00:11:40.438 "base_bdevs_list": [ 00:11:40.438 { 00:11:40.439 "name": "BaseBdev1", 00:11:40.439 "uuid": "414d489e-2f80-4b25-9a6a-0f336e569bd8", 00:11:40.439 "is_configured": true, 00:11:40.439 "data_offset": 2048, 00:11:40.439 "data_size": 63488 00:11:40.439 }, 00:11:40.439 { 00:11:40.439 "name": "BaseBdev2", 00:11:40.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.439 "is_configured": false, 00:11:40.439 
"data_offset": 0, 00:11:40.439 "data_size": 0 00:11:40.439 } 00:11:40.439 ] 00:11:40.439 }' 00:11:40.439 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.439 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.061 [2024-11-20 11:23:48.549108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.061 [2024-11-20 11:23:48.549186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.061 [2024-11-20 11:23:48.557152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.061 [2024-11-20 11:23:48.559705] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.061 [2024-11-20 11:23:48.559757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.061 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.062 "name": "Existed_Raid", 00:11:41.062 "uuid": "d2f1faaf-4134-4708-bb60-f26d68aefb77", 00:11:41.062 "strip_size_kb": 0, 00:11:41.062 "state": "configuring", 00:11:41.062 "raid_level": "raid1", 00:11:41.062 "superblock": true, 00:11:41.062 "num_base_bdevs": 2, 00:11:41.062 "num_base_bdevs_discovered": 1, 00:11:41.062 "num_base_bdevs_operational": 2, 00:11:41.062 "base_bdevs_list": [ 00:11:41.062 { 00:11:41.062 "name": "BaseBdev1", 00:11:41.062 "uuid": "414d489e-2f80-4b25-9a6a-0f336e569bd8", 00:11:41.062 "is_configured": true, 00:11:41.062 "data_offset": 2048, 00:11:41.062 "data_size": 63488 00:11:41.062 }, 00:11:41.062 { 00:11:41.062 "name": "BaseBdev2", 00:11:41.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.062 "is_configured": false, 00:11:41.062 "data_offset": 0, 00:11:41.062 "data_size": 0 00:11:41.062 } 00:11:41.062 ] 00:11:41.062 }' 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.062 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 [2024-11-20 11:23:49.039843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.321 [2024-11-20 11:23:49.040158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.321 [2024-11-20 11:23:49.040177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.321 BaseBdev2 00:11:41.321 [2024-11-20 11:23:49.040482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:41.321 [2024-11-20 11:23:49.040721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.321 [2024-11-20 11:23:49.040743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:41.321 [2024-11-20 11:23:49.040916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.321 [ 00:11:41.321 { 00:11:41.321 "name": "BaseBdev2", 00:11:41.321 "aliases": [ 00:11:41.321 "00162c5b-ad12-4917-b872-5e50e0bd5f15" 00:11:41.321 ], 00:11:41.321 "product_name": "Malloc disk", 00:11:41.321 "block_size": 512, 00:11:41.321 "num_blocks": 65536, 00:11:41.321 "uuid": "00162c5b-ad12-4917-b872-5e50e0bd5f15", 00:11:41.321 "assigned_rate_limits": { 00:11:41.321 "rw_ios_per_sec": 0, 00:11:41.321 "rw_mbytes_per_sec": 0, 00:11:41.321 "r_mbytes_per_sec": 0, 00:11:41.321 "w_mbytes_per_sec": 0 00:11:41.321 }, 00:11:41.321 "claimed": true, 00:11:41.321 "claim_type": "exclusive_write", 00:11:41.321 "zoned": false, 00:11:41.321 "supported_io_types": { 00:11:41.321 "read": true, 00:11:41.321 "write": true, 00:11:41.321 "unmap": true, 00:11:41.321 "flush": true, 00:11:41.321 "reset": true, 00:11:41.321 "nvme_admin": false, 00:11:41.321 "nvme_io": false, 00:11:41.321 "nvme_io_md": false, 00:11:41.321 "write_zeroes": true, 00:11:41.321 "zcopy": true, 00:11:41.321 "get_zone_info": false, 00:11:41.321 "zone_management": false, 00:11:41.321 "zone_append": false, 00:11:41.321 "compare": false, 00:11:41.321 "compare_and_write": false, 00:11:41.321 "abort": true, 00:11:41.321 "seek_hole": false, 00:11:41.321 "seek_data": false, 00:11:41.321 "copy": true, 00:11:41.321 "nvme_iov_md": false 00:11:41.321 }, 00:11:41.321 "memory_domains": [ 00:11:41.321 { 00:11:41.321 "dma_device_id": "system", 00:11:41.321 "dma_device_type": 1 00:11:41.321 }, 00:11:41.321 { 00:11:41.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.321 "dma_device_type": 2 00:11:41.321 } 00:11:41.321 ], 00:11:41.321 "driver_specific": {} 00:11:41.321 } 00:11:41.321 ] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:41.321 "name": "Existed_Raid", 00:11:41.321 "uuid": "d2f1faaf-4134-4708-bb60-f26d68aefb77", 00:11:41.321 "strip_size_kb": 0, 00:11:41.321 "state": "online", 00:11:41.321 "raid_level": "raid1", 00:11:41.321 "superblock": true, 00:11:41.321 "num_base_bdevs": 2, 00:11:41.321 "num_base_bdevs_discovered": 2, 00:11:41.321 "num_base_bdevs_operational": 2, 00:11:41.321 "base_bdevs_list": [ 00:11:41.321 { 00:11:41.321 "name": "BaseBdev1", 00:11:41.321 "uuid": "414d489e-2f80-4b25-9a6a-0f336e569bd8", 00:11:41.321 "is_configured": true, 00:11:41.321 "data_offset": 2048, 00:11:41.321 "data_size": 63488 00:11:41.321 }, 00:11:41.321 { 00:11:41.321 "name": "BaseBdev2", 00:11:41.321 "uuid": "00162c5b-ad12-4917-b872-5e50e0bd5f15", 00:11:41.321 "is_configured": true, 00:11:41.321 "data_offset": 2048, 00:11:41.321 "data_size": 63488 00:11:41.321 } 00:11:41.321 ] 00:11:41.321 }' 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.321 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.888 11:23:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.888 [2024-11-20 11:23:49.548391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.888 "name": "Existed_Raid", 00:11:41.888 "aliases": [ 00:11:41.888 "d2f1faaf-4134-4708-bb60-f26d68aefb77" 00:11:41.888 ], 00:11:41.888 "product_name": "Raid Volume", 00:11:41.888 "block_size": 512, 00:11:41.888 "num_blocks": 63488, 00:11:41.888 "uuid": "d2f1faaf-4134-4708-bb60-f26d68aefb77", 00:11:41.888 "assigned_rate_limits": { 00:11:41.888 "rw_ios_per_sec": 0, 00:11:41.888 "rw_mbytes_per_sec": 0, 00:11:41.888 "r_mbytes_per_sec": 0, 00:11:41.888 "w_mbytes_per_sec": 0 00:11:41.888 }, 00:11:41.888 "claimed": false, 00:11:41.888 "zoned": false, 00:11:41.888 "supported_io_types": { 00:11:41.888 "read": true, 00:11:41.888 "write": true, 00:11:41.888 "unmap": false, 00:11:41.888 "flush": false, 00:11:41.888 "reset": true, 00:11:41.888 "nvme_admin": false, 00:11:41.888 "nvme_io": false, 00:11:41.888 "nvme_io_md": false, 00:11:41.888 "write_zeroes": true, 00:11:41.888 "zcopy": false, 00:11:41.888 "get_zone_info": false, 00:11:41.888 "zone_management": false, 00:11:41.888 "zone_append": false, 00:11:41.888 "compare": false, 00:11:41.888 "compare_and_write": false, 00:11:41.888 "abort": false, 00:11:41.888 "seek_hole": false, 00:11:41.888 "seek_data": false, 00:11:41.888 "copy": false, 00:11:41.888 "nvme_iov_md": false 00:11:41.888 }, 00:11:41.888 "memory_domains": [ 00:11:41.888 { 00:11:41.888 "dma_device_id": "system", 00:11:41.888 
"dma_device_type": 1 00:11:41.888 }, 00:11:41.888 { 00:11:41.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.888 "dma_device_type": 2 00:11:41.888 }, 00:11:41.888 { 00:11:41.888 "dma_device_id": "system", 00:11:41.888 "dma_device_type": 1 00:11:41.888 }, 00:11:41.888 { 00:11:41.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.888 "dma_device_type": 2 00:11:41.888 } 00:11:41.888 ], 00:11:41.888 "driver_specific": { 00:11:41.888 "raid": { 00:11:41.888 "uuid": "d2f1faaf-4134-4708-bb60-f26d68aefb77", 00:11:41.888 "strip_size_kb": 0, 00:11:41.888 "state": "online", 00:11:41.888 "raid_level": "raid1", 00:11:41.888 "superblock": true, 00:11:41.888 "num_base_bdevs": 2, 00:11:41.888 "num_base_bdevs_discovered": 2, 00:11:41.888 "num_base_bdevs_operational": 2, 00:11:41.888 "base_bdevs_list": [ 00:11:41.888 { 00:11:41.888 "name": "BaseBdev1", 00:11:41.888 "uuid": "414d489e-2f80-4b25-9a6a-0f336e569bd8", 00:11:41.888 "is_configured": true, 00:11:41.888 "data_offset": 2048, 00:11:41.888 "data_size": 63488 00:11:41.888 }, 00:11:41.888 { 00:11:41.888 "name": "BaseBdev2", 00:11:41.888 "uuid": "00162c5b-ad12-4917-b872-5e50e0bd5f15", 00:11:41.888 "is_configured": true, 00:11:41.888 "data_offset": 2048, 00:11:41.888 "data_size": 63488 00:11:41.888 } 00:11:41.888 ] 00:11:41.888 } 00:11:41.888 } 00:11:41.888 }' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:41.888 BaseBdev2' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.888 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.148 11:23:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.148 [2024-11-20 11:23:49.832251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.148 "name": "Existed_Raid", 00:11:42.148 "uuid": "d2f1faaf-4134-4708-bb60-f26d68aefb77", 00:11:42.148 "strip_size_kb": 0, 00:11:42.148 "state": "online", 00:11:42.148 "raid_level": "raid1", 00:11:42.148 "superblock": true, 00:11:42.148 "num_base_bdevs": 2, 00:11:42.148 "num_base_bdevs_discovered": 1, 00:11:42.148 "num_base_bdevs_operational": 1, 00:11:42.148 "base_bdevs_list": [ 00:11:42.148 { 00:11:42.148 "name": null, 00:11:42.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.148 "is_configured": false, 00:11:42.148 "data_offset": 0, 00:11:42.148 "data_size": 63488 00:11:42.148 }, 00:11:42.148 { 00:11:42.148 "name": "BaseBdev2", 00:11:42.148 "uuid": "00162c5b-ad12-4917-b872-5e50e0bd5f15", 00:11:42.148 "is_configured": true, 00:11:42.148 "data_offset": 2048, 00:11:42.148 "data_size": 63488 00:11:42.148 } 00:11:42.148 ] 00:11:42.148 }' 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.148 11:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 [2024-11-20 11:23:50.449074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:42.716 [2024-11-20 11:23:50.449204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.716 [2024-11-20 11:23:50.533504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.716 [2024-11-20 11:23:50.533573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.716 [2024-11-20 11:23:50.533593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.716 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.974 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:42.974 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:42.974 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:42.974 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62862 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62862 ']' 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62862 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62862 00:11:42.975 killing process with pid 62862 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62862' 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62862 00:11:42.975 [2024-11-20 11:23:50.622542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.975 11:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62862 00:11:42.975 [2024-11-20 11:23:50.637079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.911 ************************************ 00:11:43.911 END TEST raid_state_function_test_sb 00:11:43.911 ************************************ 00:11:43.911 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.911 00:11:43.911 real 0m5.452s 00:11:43.911 user 0m8.226s 00:11:43.911 sys 0m0.763s 00:11:43.911 11:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.911 11:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.911 11:23:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:43.911 11:23:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.911 11:23:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.911 11:23:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.911 ************************************ 00:11:43.911 START TEST raid_superblock_test 00:11:43.912 ************************************ 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63120 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63120 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63120 ']' 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.912 11:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.171 [2024-11-20 11:23:51.828273] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:11:44.171 [2024-11-20 11:23:51.828468] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63120 ] 00:11:44.171 [2024-11-20 11:23:52.009404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.429 [2024-11-20 11:23:52.141633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.689 [2024-11-20 11:23:52.344944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.689 [2024-11-20 11:23:52.345030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.257 11:23:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 malloc1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 [2024-11-20 11:23:52.927620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.257 [2024-11-20 11:23:52.927710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.257 [2024-11-20 11:23:52.927750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.257 [2024-11-20 11:23:52.927765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.257 
[2024-11-20 11:23:52.931188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.257 [2024-11-20 11:23:52.931244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.257 pt1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 malloc2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 [2024-11-20 11:23:52.980075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.257 [2024-11-20 11:23:52.980141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.257 [2024-11-20 11:23:52.980172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.257 [2024-11-20 11:23:52.980186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.257 [2024-11-20 11:23:52.983044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.257 [2024-11-20 11:23:52.983084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.257 pt2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 [2024-11-20 11:23:52.988157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.257 [2024-11-20 11:23:52.990680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.257 [2024-11-20 11:23:52.990911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.257 [2024-11-20 11:23:52.990935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.257 [2024-11-20 
11:23:52.991265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:45.257 [2024-11-20 11:23:52.991470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.257 [2024-11-20 11:23:52.991495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.257 [2024-11-20 11:23:52.991715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.257 11:23:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.257 11:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.257 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.257 "name": "raid_bdev1", 00:11:45.257 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:45.257 "strip_size_kb": 0, 00:11:45.257 "state": "online", 00:11:45.257 "raid_level": "raid1", 00:11:45.257 "superblock": true, 00:11:45.257 "num_base_bdevs": 2, 00:11:45.257 "num_base_bdevs_discovered": 2, 00:11:45.257 "num_base_bdevs_operational": 2, 00:11:45.257 "base_bdevs_list": [ 00:11:45.257 { 00:11:45.257 "name": "pt1", 00:11:45.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.257 "is_configured": true, 00:11:45.257 "data_offset": 2048, 00:11:45.257 "data_size": 63488 00:11:45.257 }, 00:11:45.257 { 00:11:45.257 "name": "pt2", 00:11:45.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.257 "is_configured": true, 00:11:45.257 "data_offset": 2048, 00:11:45.257 "data_size": 63488 00:11:45.257 } 00:11:45.257 ] 00:11:45.257 }' 00:11:45.257 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.258 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.828 
11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.828 [2024-11-20 11:23:53.512887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.828 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.828 "name": "raid_bdev1", 00:11:45.828 "aliases": [ 00:11:45.828 "127998d7-67f3-433d-839c-1cf599049d96" 00:11:45.828 ], 00:11:45.828 "product_name": "Raid Volume", 00:11:45.828 "block_size": 512, 00:11:45.828 "num_blocks": 63488, 00:11:45.828 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:45.828 "assigned_rate_limits": { 00:11:45.828 "rw_ios_per_sec": 0, 00:11:45.828 "rw_mbytes_per_sec": 0, 00:11:45.828 "r_mbytes_per_sec": 0, 00:11:45.828 "w_mbytes_per_sec": 0 00:11:45.828 }, 00:11:45.828 "claimed": false, 00:11:45.828 "zoned": false, 00:11:45.828 "supported_io_types": { 00:11:45.828 "read": true, 00:11:45.828 "write": true, 00:11:45.828 "unmap": false, 00:11:45.828 "flush": false, 00:11:45.828 "reset": true, 00:11:45.828 "nvme_admin": false, 00:11:45.828 "nvme_io": false, 00:11:45.828 "nvme_io_md": false, 00:11:45.828 "write_zeroes": true, 00:11:45.828 "zcopy": false, 00:11:45.828 "get_zone_info": false, 00:11:45.828 "zone_management": false, 00:11:45.828 "zone_append": false, 00:11:45.828 "compare": false, 00:11:45.828 "compare_and_write": false, 00:11:45.828 "abort": false, 00:11:45.828 "seek_hole": false, 
00:11:45.828 "seek_data": false, 00:11:45.828 "copy": false, 00:11:45.828 "nvme_iov_md": false 00:11:45.828 }, 00:11:45.828 "memory_domains": [ 00:11:45.828 { 00:11:45.828 "dma_device_id": "system", 00:11:45.828 "dma_device_type": 1 00:11:45.828 }, 00:11:45.828 { 00:11:45.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.828 "dma_device_type": 2 00:11:45.828 }, 00:11:45.828 { 00:11:45.828 "dma_device_id": "system", 00:11:45.828 "dma_device_type": 1 00:11:45.828 }, 00:11:45.828 { 00:11:45.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.828 "dma_device_type": 2 00:11:45.828 } 00:11:45.828 ], 00:11:45.828 "driver_specific": { 00:11:45.828 "raid": { 00:11:45.829 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:45.829 "strip_size_kb": 0, 00:11:45.829 "state": "online", 00:11:45.829 "raid_level": "raid1", 00:11:45.829 "superblock": true, 00:11:45.829 "num_base_bdevs": 2, 00:11:45.829 "num_base_bdevs_discovered": 2, 00:11:45.829 "num_base_bdevs_operational": 2, 00:11:45.829 "base_bdevs_list": [ 00:11:45.829 { 00:11:45.829 "name": "pt1", 00:11:45.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.829 "is_configured": true, 00:11:45.829 "data_offset": 2048, 00:11:45.829 "data_size": 63488 00:11:45.829 }, 00:11:45.829 { 00:11:45.829 "name": "pt2", 00:11:45.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.829 "is_configured": true, 00:11:45.829 "data_offset": 2048, 00:11:45.829 "data_size": 63488 00:11:45.829 } 00:11:45.829 ] 00:11:45.829 } 00:11:45.829 } 00:11:45.829 }' 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:45.829 pt2' 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.829 11:23:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.829 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.098 [2024-11-20 11:23:53.760926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=127998d7-67f3-433d-839c-1cf599049d96 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 127998d7-67f3-433d-839c-1cf599049d96 ']' 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.098 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 [2024-11-20 11:23:53.808528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.099 [2024-11-20 11:23:53.808562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.099 [2024-11-20 11:23:53.808720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.099 [2024-11-20 11:23:53.808800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.099 [2024-11-20 11:23:53.808825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.099 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.358 [2024-11-20 11:23:53.960606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.358 [2024-11-20 11:23:53.963310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.358 [2024-11-20 11:23:53.963415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:11:46.358 [2024-11-20 11:23:53.963487] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.358 [2024-11-20 11:23:53.963529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.358 [2024-11-20 11:23:53.963560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:46.358 request: 00:11:46.358 { 00:11:46.358 "name": "raid_bdev1", 00:11:46.358 "raid_level": "raid1", 00:11:46.358 "base_bdevs": [ 00:11:46.358 "malloc1", 00:11:46.358 "malloc2" 00:11:46.358 ], 00:11:46.358 "superblock": false, 00:11:46.358 "method": "bdev_raid_create", 00:11:46.358 "req_id": 1 00:11:46.358 } 00:11:46.358 Got JSON-RPC error response 00:11:46.358 response: 00:11:46.358 { 00:11:46.358 "code": -17, 00:11:46.358 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.358 } 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.358 11:23:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.358 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.358 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.358 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 [2024-11-20 11:23:54.024655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.359 [2024-11-20 11:23:54.024741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.359 [2024-11-20 11:23:54.024768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:46.359 [2024-11-20 11:23:54.024785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.359 [2024-11-20 11:23:54.027716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.359 [2024-11-20 11:23:54.027761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.359 [2024-11-20 11:23:54.027869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.359 [2024-11-20 11:23:54.027962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:46.359 pt1 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.359 11:23:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.359 "name": "raid_bdev1", 00:11:46.359 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:46.359 "strip_size_kb": 0, 00:11:46.359 "state": "configuring", 00:11:46.359 "raid_level": "raid1", 00:11:46.359 "superblock": true, 00:11:46.359 "num_base_bdevs": 2, 00:11:46.359 "num_base_bdevs_discovered": 1, 00:11:46.359 "num_base_bdevs_operational": 2, 00:11:46.359 "base_bdevs_list": [ 00:11:46.359 { 00:11:46.359 "name": "pt1", 00:11:46.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.359 
"is_configured": true, 00:11:46.359 "data_offset": 2048, 00:11:46.359 "data_size": 63488 00:11:46.359 }, 00:11:46.359 { 00:11:46.359 "name": null, 00:11:46.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.359 "is_configured": false, 00:11:46.359 "data_offset": 2048, 00:11:46.359 "data_size": 63488 00:11:46.359 } 00:11:46.359 ] 00:11:46.359 }' 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.359 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.926 [2024-11-20 11:23:54.540901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.926 [2024-11-20 11:23:54.540987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.926 [2024-11-20 11:23:54.541033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:46.926 [2024-11-20 11:23:54.541050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.926 [2024-11-20 11:23:54.541622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.926 [2024-11-20 11:23:54.541672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.926 [2024-11-20 11:23:54.541791] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.926 [2024-11-20 11:23:54.541828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.926 [2024-11-20 11:23:54.541979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.926 [2024-11-20 11:23:54.542000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.926 [2024-11-20 11:23:54.542296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:46.926 [2024-11-20 11:23:54.542484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.926 [2024-11-20 11:23:54.542500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.926 [2024-11-20 11:23:54.542688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.926 pt2 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.926 
11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.926 "name": "raid_bdev1", 00:11:46.926 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:46.926 "strip_size_kb": 0, 00:11:46.926 "state": "online", 00:11:46.926 "raid_level": "raid1", 00:11:46.926 "superblock": true, 00:11:46.926 "num_base_bdevs": 2, 00:11:46.926 "num_base_bdevs_discovered": 2, 00:11:46.926 "num_base_bdevs_operational": 2, 00:11:46.926 "base_bdevs_list": [ 00:11:46.926 { 00:11:46.926 "name": "pt1", 00:11:46.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.926 "is_configured": true, 00:11:46.926 "data_offset": 2048, 00:11:46.926 "data_size": 63488 00:11:46.926 }, 00:11:46.926 { 00:11:46.926 "name": "pt2", 00:11:46.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.926 "is_configured": true, 00:11:46.926 "data_offset": 2048, 00:11:46.926 "data_size": 63488 00:11:46.926 } 00:11:46.926 ] 00:11:46.926 }' 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:46.926 11:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.494 [2024-11-20 11:23:55.137385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.494 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.494 "name": "raid_bdev1", 00:11:47.494 "aliases": [ 00:11:47.494 "127998d7-67f3-433d-839c-1cf599049d96" 00:11:47.494 ], 00:11:47.494 "product_name": "Raid Volume", 00:11:47.494 "block_size": 512, 00:11:47.494 "num_blocks": 63488, 00:11:47.494 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:47.494 "assigned_rate_limits": { 00:11:47.494 "rw_ios_per_sec": 0, 00:11:47.494 "rw_mbytes_per_sec": 0, 00:11:47.494 "r_mbytes_per_sec": 0, 00:11:47.494 "w_mbytes_per_sec": 0 
00:11:47.494 }, 00:11:47.494 "claimed": false, 00:11:47.494 "zoned": false, 00:11:47.494 "supported_io_types": { 00:11:47.494 "read": true, 00:11:47.494 "write": true, 00:11:47.494 "unmap": false, 00:11:47.494 "flush": false, 00:11:47.494 "reset": true, 00:11:47.494 "nvme_admin": false, 00:11:47.494 "nvme_io": false, 00:11:47.494 "nvme_io_md": false, 00:11:47.494 "write_zeroes": true, 00:11:47.494 "zcopy": false, 00:11:47.494 "get_zone_info": false, 00:11:47.494 "zone_management": false, 00:11:47.494 "zone_append": false, 00:11:47.494 "compare": false, 00:11:47.494 "compare_and_write": false, 00:11:47.494 "abort": false, 00:11:47.494 "seek_hole": false, 00:11:47.494 "seek_data": false, 00:11:47.494 "copy": false, 00:11:47.494 "nvme_iov_md": false 00:11:47.494 }, 00:11:47.494 "memory_domains": [ 00:11:47.494 { 00:11:47.494 "dma_device_id": "system", 00:11:47.494 "dma_device_type": 1 00:11:47.494 }, 00:11:47.494 { 00:11:47.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.494 "dma_device_type": 2 00:11:47.495 }, 00:11:47.495 { 00:11:47.495 "dma_device_id": "system", 00:11:47.495 "dma_device_type": 1 00:11:47.495 }, 00:11:47.495 { 00:11:47.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.495 "dma_device_type": 2 00:11:47.495 } 00:11:47.495 ], 00:11:47.495 "driver_specific": { 00:11:47.495 "raid": { 00:11:47.495 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:47.495 "strip_size_kb": 0, 00:11:47.495 "state": "online", 00:11:47.495 "raid_level": "raid1", 00:11:47.495 "superblock": true, 00:11:47.495 "num_base_bdevs": 2, 00:11:47.495 "num_base_bdevs_discovered": 2, 00:11:47.495 "num_base_bdevs_operational": 2, 00:11:47.495 "base_bdevs_list": [ 00:11:47.495 { 00:11:47.495 "name": "pt1", 00:11:47.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.495 "is_configured": true, 00:11:47.495 "data_offset": 2048, 00:11:47.495 "data_size": 63488 00:11:47.495 }, 00:11:47.495 { 00:11:47.495 "name": "pt2", 00:11:47.495 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:47.495 "is_configured": true, 00:11:47.495 "data_offset": 2048, 00:11:47.495 "data_size": 63488 00:11:47.495 } 00:11:47.495 ] 00:11:47.495 } 00:11:47.495 } 00:11:47.495 }' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:47.495 pt2' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.495 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.754 [2024-11-20 11:23:55.401396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 127998d7-67f3-433d-839c-1cf599049d96 '!=' 127998d7-67f3-433d-839c-1cf599049d96 ']' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.754 [2024-11-20 11:23:55.453141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:47.754 "name": "raid_bdev1", 00:11:47.754 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:47.754 "strip_size_kb": 0, 00:11:47.754 "state": "online", 00:11:47.754 "raid_level": "raid1", 00:11:47.754 "superblock": true, 00:11:47.754 "num_base_bdevs": 2, 00:11:47.754 "num_base_bdevs_discovered": 1, 00:11:47.754 "num_base_bdevs_operational": 1, 00:11:47.754 "base_bdevs_list": [ 00:11:47.754 { 00:11:47.754 "name": null, 00:11:47.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.754 "is_configured": false, 00:11:47.754 "data_offset": 0, 00:11:47.754 "data_size": 63488 00:11:47.754 }, 00:11:47.754 { 00:11:47.754 "name": "pt2", 00:11:47.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.754 "is_configured": true, 00:11:47.754 "data_offset": 2048, 00:11:47.754 "data_size": 63488 00:11:47.754 } 00:11:47.754 ] 00:11:47.754 }' 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.754 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 [2024-11-20 11:23:55.953286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.323 [2024-11-20 11:23:55.953542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.323 [2024-11-20 11:23:55.953783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.323 [2024-11-20 11:23:55.953963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.323 [2024-11-20 11:23:55.953997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 [2024-11-20 11:23:56.021248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.323 [2024-11-20 11:23:56.021323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.323 [2024-11-20 11:23:56.021350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:48.323 [2024-11-20 11:23:56.021367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.323 [2024-11-20 11:23:56.024315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.323 [2024-11-20 11:23:56.024479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.323 [2024-11-20 11:23:56.024590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:48.323 [2024-11-20 11:23:56.024673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.323 [2024-11-20 11:23:56.024803] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:48.323 [2024-11-20 11:23:56.024826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.323 [2024-11-20 11:23:56.025112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:48.323 [2024-11-20 11:23:56.025297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:48.323 [2024-11-20 11:23:56.025314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:11:48.323 [2024-11-20 11:23:56.025532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.323 pt2 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:48.323 "name": "raid_bdev1", 00:11:48.323 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:48.323 "strip_size_kb": 0, 00:11:48.323 "state": "online", 00:11:48.323 "raid_level": "raid1", 00:11:48.323 "superblock": true, 00:11:48.323 "num_base_bdevs": 2, 00:11:48.323 "num_base_bdevs_discovered": 1, 00:11:48.323 "num_base_bdevs_operational": 1, 00:11:48.323 "base_bdevs_list": [ 00:11:48.323 { 00:11:48.323 "name": null, 00:11:48.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.323 "is_configured": false, 00:11:48.323 "data_offset": 2048, 00:11:48.323 "data_size": 63488 00:11:48.323 }, 00:11:48.323 { 00:11:48.323 "name": "pt2", 00:11:48.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.323 "is_configured": true, 00:11:48.323 "data_offset": 2048, 00:11:48.323 "data_size": 63488 00:11:48.323 } 00:11:48.323 ] 00:11:48.323 }' 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.323 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.890 [2024-11-20 11:23:56.545581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.890 [2024-11-20 11:23:56.545618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.890 [2024-11-20 11:23:56.545939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.890 [2024-11-20 11:23:56.546027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.890 [2024-11-20 11:23:56.546045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.890 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.890 [2024-11-20 11:23:56.609610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.890 [2024-11-20 11:23:56.609692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.890 [2024-11-20 11:23:56.609722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:48.890 [2024-11-20 11:23:56.609750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.890 [2024-11-20 11:23:56.612617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.890 [2024-11-20 11:23:56.612804] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.890 [2024-11-20 11:23:56.612924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.890 [2024-11-20 11:23:56.612992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.890 [2024-11-20 11:23:56.613162] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:48.890 [2024-11-20 11:23:56.613180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.890 [2024-11-20 11:23:56.613203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:48.890 [2024-11-20 11:23:56.613272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.890 [2024-11-20 11:23:56.613375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:48.891 [2024-11-20 11:23:56.613391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.891 [2024-11-20 11:23:56.613710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:48.891 [2024-11-20 11:23:56.613905] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:48.891 [2024-11-20 11:23:56.613926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:48.891 [2024-11-20 11:23:56.614164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.891 pt1 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.891 "name": "raid_bdev1", 00:11:48.891 "uuid": "127998d7-67f3-433d-839c-1cf599049d96", 00:11:48.891 "strip_size_kb": 0, 00:11:48.891 "state": "online", 00:11:48.891 "raid_level": "raid1", 00:11:48.891 "superblock": true, 00:11:48.891 "num_base_bdevs": 2, 00:11:48.891 "num_base_bdevs_discovered": 1, 00:11:48.891 "num_base_bdevs_operational": 
1, 00:11:48.891 "base_bdevs_list": [ 00:11:48.891 { 00:11:48.891 "name": null, 00:11:48.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.891 "is_configured": false, 00:11:48.891 "data_offset": 2048, 00:11:48.891 "data_size": 63488 00:11:48.891 }, 00:11:48.891 { 00:11:48.891 "name": "pt2", 00:11:48.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.891 "is_configured": true, 00:11:48.891 "data_offset": 2048, 00:11:48.891 "data_size": 63488 00:11:48.891 } 00:11:48.891 ] 00:11:48.891 }' 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.891 11:23:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.460 [2024-11-20 11:23:57.194512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 127998d7-67f3-433d-839c-1cf599049d96 '!=' 127998d7-67f3-433d-839c-1cf599049d96 ']' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63120 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63120 ']' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63120 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63120 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.460 killing process with pid 63120 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63120' 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63120 00:11:49.460 [2024-11-20 11:23:57.276656] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.460 [2024-11-20 11:23:57.276766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.460 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63120 00:11:49.460 [2024-11-20 11:23:57.276832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.460 [2024-11-20 11:23:57.276856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:11:49.718 [2024-11-20 11:23:57.462227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.653 ************************************ 00:11:50.653 END TEST raid_superblock_test 00:11:50.653 ************************************ 00:11:50.653 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.653 00:11:50.653 real 0m6.756s 00:11:50.653 user 0m10.780s 00:11:50.653 sys 0m0.967s 00:11:50.653 11:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.653 11:23:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.911 11:23:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:50.911 11:23:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.911 11:23:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.911 11:23:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.911 ************************************ 00:11:50.911 START TEST raid_read_error_test 00:11:50.911 ************************************ 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I93YX22RPG 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63455 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63455 00:11:50.911 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63455 ']' 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.911 11:23:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.911 [2024-11-20 11:23:58.632520] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:11:50.911 [2024-11-20 11:23:58.632863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63455 ] 00:11:51.170 [2024-11-20 11:23:58.803931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.171 [2024-11-20 11:23:58.929358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.429 [2024-11-20 11:23:59.130757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.429 [2024-11-20 11:23:59.130979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.994 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 BaseBdev1_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 true 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 [2024-11-20 11:23:59.668166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.995 [2024-11-20 11:23:59.668458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.995 [2024-11-20 11:23:59.668512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.995 [2024-11-20 11:23:59.668531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.995 [2024-11-20 11:23:59.671537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.995 [2024-11-20 11:23:59.671754] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.995 BaseBdev1 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 BaseBdev2_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 true 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 [2024-11-20 11:23:59.736448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.995 [2024-11-20 11:23:59.736560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.995 [2024-11-20 11:23:59.736583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.995 [2024-11-20 11:23:59.736599] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.995 [2024-11-20 11:23:59.739348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.995 [2024-11-20 11:23:59.739392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.995 BaseBdev2 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 [2024-11-20 11:23:59.744553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.995 [2024-11-20 11:23:59.747291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.995 [2024-11-20 11:23:59.747743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.995 [2024-11-20 11:23:59.747882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.995 [2024-11-20 11:23:59.748229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:51.995 [2024-11-20 11:23:59.748579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.995 [2024-11-20 11:23:59.748744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:51.995 [2024-11-20 11:23:59.749182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.995 "name": "raid_bdev1", 00:11:51.995 "uuid": "b649a6dc-b444-43ed-a70b-0555c425ebd2", 00:11:51.995 "strip_size_kb": 0, 00:11:51.995 "state": "online", 00:11:51.995 "raid_level": "raid1", 00:11:51.995 "superblock": true, 00:11:51.995 "num_base_bdevs": 2, 00:11:51.995 
"num_base_bdevs_discovered": 2, 00:11:51.995 "num_base_bdevs_operational": 2, 00:11:51.995 "base_bdevs_list": [ 00:11:51.995 { 00:11:51.995 "name": "BaseBdev1", 00:11:51.995 "uuid": "8b12c61f-9e88-5470-83c4-16cf93aaba0b", 00:11:51.995 "is_configured": true, 00:11:51.995 "data_offset": 2048, 00:11:51.995 "data_size": 63488 00:11:51.995 }, 00:11:51.995 { 00:11:51.995 "name": "BaseBdev2", 00:11:51.995 "uuid": "6ae99c45-c32b-50c5-81c8-2ecb54b5cc96", 00:11:51.995 "is_configured": true, 00:11:51.995 "data_offset": 2048, 00:11:51.995 "data_size": 63488 00:11:51.995 } 00:11:51.995 ] 00:11:51.995 }' 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.995 11:23:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.562 11:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.562 11:24:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.562 [2024-11-20 11:24:00.326805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:53.497 11:24:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.497 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.498 "name": "raid_bdev1", 00:11:53.498 "uuid": "b649a6dc-b444-43ed-a70b-0555c425ebd2", 00:11:53.498 "strip_size_kb": 0, 00:11:53.498 "state": "online", 
00:11:53.498 "raid_level": "raid1", 00:11:53.498 "superblock": true, 00:11:53.498 "num_base_bdevs": 2, 00:11:53.498 "num_base_bdevs_discovered": 2, 00:11:53.498 "num_base_bdevs_operational": 2, 00:11:53.498 "base_bdevs_list": [ 00:11:53.498 { 00:11:53.498 "name": "BaseBdev1", 00:11:53.498 "uuid": "8b12c61f-9e88-5470-83c4-16cf93aaba0b", 00:11:53.498 "is_configured": true, 00:11:53.498 "data_offset": 2048, 00:11:53.498 "data_size": 63488 00:11:53.498 }, 00:11:53.498 { 00:11:53.498 "name": "BaseBdev2", 00:11:53.498 "uuid": "6ae99c45-c32b-50c5-81c8-2ecb54b5cc96", 00:11:53.498 "is_configured": true, 00:11:53.498 "data_offset": 2048, 00:11:53.498 "data_size": 63488 00:11:53.498 } 00:11:53.498 ] 00:11:53.498 }' 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.498 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.063 [2024-11-20 11:24:01.769792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.063 [2024-11-20 11:24:01.770021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.063 [2024-11-20 11:24:01.773388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.063 [2024-11-20 11:24:01.773570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.063 [2024-11-20 11:24:01.773876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.063 [2024-11-20 11:24:01.774044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:11:54.063 { 00:11:54.063 "results": [ 00:11:54.063 { 00:11:54.063 "job": "raid_bdev1", 00:11:54.063 "core_mask": "0x1", 00:11:54.063 "workload": "randrw", 00:11:54.063 "percentage": 50, 00:11:54.063 "status": "finished", 00:11:54.063 "queue_depth": 1, 00:11:54.063 "io_size": 131072, 00:11:54.063 "runtime": 1.440515, 00:11:54.063 "iops": 12348.361523482921, 00:11:54.063 "mibps": 1543.5451904353652, 00:11:54.063 "io_failed": 0, 00:11:54.063 "io_timeout": 0, 00:11:54.063 "avg_latency_us": 76.85983686652902, 00:11:54.063 "min_latency_us": 38.63272727272727, 00:11:54.063 "max_latency_us": 1891.6072727272726 00:11:54.063 } 00:11:54.063 ], 00:11:54.063 "core_count": 1 00:11:54.063 } 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63455 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63455 ']' 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63455 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63455 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63455' 00:11:54.063 killing process with pid 63455 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63455 00:11:54.063 [2024-11-20 
11:24:01.813284] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.063 11:24:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63455 00:11:54.321 [2024-11-20 11:24:01.933354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I93YX22RPG 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:55.257 ************************************ 00:11:55.257 END TEST raid_read_error_test 00:11:55.257 ************************************ 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:55.257 00:11:55.257 real 0m4.490s 00:11:55.257 user 0m5.580s 00:11:55.257 sys 0m0.548s 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.257 11:24:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.257 11:24:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:55.257 11:24:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.257 11:24:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.257 11:24:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.257 ************************************ 00:11:55.257 START TEST 
raid_write_error_test 00:11:55.257 ************************************ 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.257 11:24:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:55.257 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GjlQxeW9S3 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63601 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63601 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63601 ']' 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.258 11:24:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.516 [2024-11-20 11:24:03.194149] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:11:55.516 [2024-11-20 11:24:03.194353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63601 ] 00:11:55.823 [2024-11-20 11:24:03.384400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.823 [2024-11-20 11:24:03.511892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.082 [2024-11-20 11:24:03.714161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.082 [2024-11-20 11:24:03.714199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.340 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.341 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.341 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.341 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.341 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.341 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 BaseBdev1_malloc 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 true 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 [2024-11-20 11:24:04.200485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.600 [2024-11-20 11:24:04.200573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.600 [2024-11-20 11:24:04.200603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.600 [2024-11-20 11:24:04.200621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.600 [2024-11-20 11:24:04.203430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.600 [2024-11-20 11:24:04.203478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.600 BaseBdev1 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 BaseBdev2_malloc 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.600 11:24:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.600 true 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.600 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 [2024-11-20 11:24:04.256228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.601 [2024-11-20 11:24:04.256299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.601 [2024-11-20 11:24:04.256324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.601 [2024-11-20 11:24:04.256341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.601 [2024-11-20 11:24:04.259128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.601 [2024-11-20 11:24:04.259177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.601 BaseBdev2 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 [2024-11-20 11:24:04.264298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:56.601 [2024-11-20 11:24:04.266903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.601 [2024-11-20 11:24:04.267171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.601 [2024-11-20 11:24:04.267195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.601 [2024-11-20 11:24:04.267481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:56.601 [2024-11-20 11:24:04.267891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.601 [2024-11-20 11:24:04.268064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:56.601 [2024-11-20 11:24:04.268449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.601 "name": "raid_bdev1", 00:11:56.601 "uuid": "1c1bd013-ab26-4e25-84cf-4e94ed2a882c", 00:11:56.601 "strip_size_kb": 0, 00:11:56.601 "state": "online", 00:11:56.601 "raid_level": "raid1", 00:11:56.601 "superblock": true, 00:11:56.601 "num_base_bdevs": 2, 00:11:56.601 "num_base_bdevs_discovered": 2, 00:11:56.601 "num_base_bdevs_operational": 2, 00:11:56.601 "base_bdevs_list": [ 00:11:56.601 { 00:11:56.601 "name": "BaseBdev1", 00:11:56.601 "uuid": "ec7a0f5f-6614-5ef7-aaf3-7bd6c4fd4a29", 00:11:56.601 "is_configured": true, 00:11:56.601 "data_offset": 2048, 00:11:56.601 "data_size": 63488 00:11:56.601 }, 00:11:56.601 { 00:11:56.601 "name": "BaseBdev2", 00:11:56.601 "uuid": "7c47be3a-c05e-59af-90b4-a1a990cb3129", 00:11:56.601 "is_configured": true, 00:11:56.601 "data_offset": 2048, 00:11:56.601 "data_size": 63488 00:11:56.601 } 00:11:56.601 ] 00:11:56.601 }' 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.601 11:24:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.169 11:24:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:57.169 11:24:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:57.169 [2024-11-20 11:24:04.881996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.105 [2024-11-20 11:24:05.766944] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:58.105 [2024-11-20 11:24:05.767036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.105 [2024-11-20 11:24:05.767257] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.105 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.105 "name": "raid_bdev1", 00:11:58.105 "uuid": "1c1bd013-ab26-4e25-84cf-4e94ed2a882c", 00:11:58.105 "strip_size_kb": 0, 00:11:58.105 "state": "online", 00:11:58.106 "raid_level": "raid1", 00:11:58.106 "superblock": true, 00:11:58.106 "num_base_bdevs": 2, 00:11:58.106 "num_base_bdevs_discovered": 1, 00:11:58.106 "num_base_bdevs_operational": 1, 00:11:58.106 "base_bdevs_list": [ 00:11:58.106 { 00:11:58.106 "name": null, 00:11:58.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.106 "is_configured": false, 00:11:58.106 "data_offset": 0, 00:11:58.106 "data_size": 63488 00:11:58.106 }, 00:11:58.106 { 00:11:58.106 "name": 
"BaseBdev2", 00:11:58.106 "uuid": "7c47be3a-c05e-59af-90b4-a1a990cb3129", 00:11:58.106 "is_configured": true, 00:11:58.106 "data_offset": 2048, 00:11:58.106 "data_size": 63488 00:11:58.106 } 00:11:58.106 ] 00:11:58.106 }' 00:11:58.106 11:24:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.106 11:24:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.674 [2024-11-20 11:24:06.314159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.674 [2024-11-20 11:24:06.314202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.674 [2024-11-20 11:24:06.317767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.674 [2024-11-20 11:24:06.317820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.674 [2024-11-20 11:24:06.317907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.674 [2024-11-20 11:24:06.317923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:58.674 { 00:11:58.674 "results": [ 00:11:58.674 { 00:11:58.674 "job": "raid_bdev1", 00:11:58.674 "core_mask": "0x1", 00:11:58.674 "workload": "randrw", 00:11:58.674 "percentage": 50, 00:11:58.674 "status": "finished", 00:11:58.674 "queue_depth": 1, 00:11:58.674 "io_size": 131072, 00:11:58.674 "runtime": 1.429809, 00:11:58.674 "iops": 15004.101946483761, 00:11:58.674 "mibps": 1875.5127433104701, 00:11:58.674 "io_failed": 0, 00:11:58.674 "io_timeout": 0, 
00:11:58.674 "avg_latency_us": 62.4483492454965, 00:11:58.674 "min_latency_us": 38.4, 00:11:58.674 "max_latency_us": 1899.0545454545454 00:11:58.674 } 00:11:58.674 ], 00:11:58.674 "core_count": 1 00:11:58.674 } 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63601 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63601 ']' 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63601 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63601 00:11:58.674 killing process with pid 63601 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63601' 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63601 00:11:58.674 [2024-11-20 11:24:06.360566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.674 11:24:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63601 00:11:58.674 [2024-11-20 11:24:06.486752] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.GjlQxeW9S3 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:00.085 00:12:00.085 real 0m4.542s 00:12:00.085 user 0m5.652s 00:12:00.085 sys 0m0.582s 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.085 ************************************ 00:12:00.085 END TEST raid_write_error_test 00:12:00.085 ************************************ 00:12:00.085 11:24:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 11:24:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:00.085 11:24:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:00.085 11:24:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:00.085 11:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.085 11:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.085 11:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 ************************************ 00:12:00.085 START TEST raid_state_function_test 00:12:00.085 ************************************ 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.085 Process raid pid: 63739 
00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63739 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63739' 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63739 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63739 ']' 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.085 11:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.085 [2024-11-20 11:24:07.792285] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:12:00.086 [2024-11-20 11:24:07.792853] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.345 [2024-11-20 11:24:07.994491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.345 [2024-11-20 11:24:08.167194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.603 [2024-11-20 11:24:08.400042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.603 [2024-11-20 11:24:08.400364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 [2024-11-20 11:24:08.797525] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.171 [2024-11-20 11:24:08.797592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.171 [2024-11-20 11:24:08.797608] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.171 [2024-11-20 11:24:08.797693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.171 [2024-11-20 11:24:08.797705] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.171 [2024-11-20 11:24:08.797721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.171 "name": "Existed_Raid", 00:12:01.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.171 "strip_size_kb": 64, 00:12:01.171 "state": "configuring", 00:12:01.171 "raid_level": "raid0", 00:12:01.171 "superblock": false, 00:12:01.171 "num_base_bdevs": 3, 00:12:01.171 "num_base_bdevs_discovered": 0, 00:12:01.171 "num_base_bdevs_operational": 3, 00:12:01.171 "base_bdevs_list": [ 00:12:01.171 { 00:12:01.171 "name": "BaseBdev1", 00:12:01.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.171 "is_configured": false, 00:12:01.171 "data_offset": 0, 00:12:01.171 "data_size": 0 00:12:01.171 }, 00:12:01.171 { 00:12:01.171 "name": "BaseBdev2", 00:12:01.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.171 "is_configured": false, 00:12:01.171 "data_offset": 0, 00:12:01.171 "data_size": 0 00:12:01.171 }, 00:12:01.171 { 00:12:01.171 "name": "BaseBdev3", 00:12:01.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.171 "is_configured": false, 00:12:01.171 "data_offset": 0, 00:12:01.171 "data_size": 0 00:12:01.171 } 00:12:01.171 ] 00:12:01.171 }' 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.171 11:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.739 11:24:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 [2024-11-20 11:24:09.325602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.739 [2024-11-20 11:24:09.325662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 [2024-11-20 11:24:09.333591] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.739 [2024-11-20 11:24:09.333817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.739 [2024-11-20 11:24:09.333842] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.739 [2024-11-20 11:24:09.333860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.739 [2024-11-20 11:24:09.333870] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.739 [2024-11-20 11:24:09.333885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 [2024-11-20 11:24:09.378781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.739 BaseBdev1 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.739 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.739 [ 00:12:01.739 { 00:12:01.739 "name": "BaseBdev1", 00:12:01.739 "aliases": [ 00:12:01.739 "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc" 00:12:01.739 ], 00:12:01.739 
"product_name": "Malloc disk", 00:12:01.739 "block_size": 512, 00:12:01.739 "num_blocks": 65536, 00:12:01.739 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:01.739 "assigned_rate_limits": { 00:12:01.739 "rw_ios_per_sec": 0, 00:12:01.739 "rw_mbytes_per_sec": 0, 00:12:01.739 "r_mbytes_per_sec": 0, 00:12:01.739 "w_mbytes_per_sec": 0 00:12:01.739 }, 00:12:01.739 "claimed": true, 00:12:01.739 "claim_type": "exclusive_write", 00:12:01.739 "zoned": false, 00:12:01.739 "supported_io_types": { 00:12:01.739 "read": true, 00:12:01.739 "write": true, 00:12:01.739 "unmap": true, 00:12:01.739 "flush": true, 00:12:01.739 "reset": true, 00:12:01.739 "nvme_admin": false, 00:12:01.739 "nvme_io": false, 00:12:01.739 "nvme_io_md": false, 00:12:01.740 "write_zeroes": true, 00:12:01.740 "zcopy": true, 00:12:01.740 "get_zone_info": false, 00:12:01.740 "zone_management": false, 00:12:01.740 "zone_append": false, 00:12:01.740 "compare": false, 00:12:01.740 "compare_and_write": false, 00:12:01.740 "abort": true, 00:12:01.740 "seek_hole": false, 00:12:01.740 "seek_data": false, 00:12:01.740 "copy": true, 00:12:01.740 "nvme_iov_md": false 00:12:01.740 }, 00:12:01.740 "memory_domains": [ 00:12:01.740 { 00:12:01.740 "dma_device_id": "system", 00:12:01.740 "dma_device_type": 1 00:12:01.740 }, 00:12:01.740 { 00:12:01.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.740 "dma_device_type": 2 00:12:01.740 } 00:12:01.740 ], 00:12:01.740 "driver_specific": {} 00:12:01.740 } 00:12:01.740 ] 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.740 11:24:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.740 "name": "Existed_Raid", 00:12:01.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.740 "strip_size_kb": 64, 00:12:01.740 "state": "configuring", 00:12:01.740 "raid_level": "raid0", 00:12:01.740 "superblock": false, 00:12:01.740 "num_base_bdevs": 3, 00:12:01.740 "num_base_bdevs_discovered": 1, 00:12:01.740 "num_base_bdevs_operational": 3, 00:12:01.740 "base_bdevs_list": [ 00:12:01.740 { 00:12:01.740 "name": "BaseBdev1", 
00:12:01.740 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:01.740 "is_configured": true, 00:12:01.740 "data_offset": 0, 00:12:01.740 "data_size": 65536 00:12:01.740 }, 00:12:01.740 { 00:12:01.740 "name": "BaseBdev2", 00:12:01.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.740 "is_configured": false, 00:12:01.740 "data_offset": 0, 00:12:01.740 "data_size": 0 00:12:01.740 }, 00:12:01.740 { 00:12:01.740 "name": "BaseBdev3", 00:12:01.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.740 "is_configured": false, 00:12:01.740 "data_offset": 0, 00:12:01.740 "data_size": 0 00:12:01.740 } 00:12:01.740 ] 00:12:01.740 }' 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.740 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 [2024-11-20 11:24:09.910974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.308 [2024-11-20 11:24:09.911052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 [2024-11-20 
11:24:09.919026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.308 [2024-11-20 11:24:09.921541] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.308 [2024-11-20 11:24:09.921609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.308 [2024-11-20 11:24:09.921626] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.308 [2024-11-20 11:24:09.921664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.308 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.308 "name": "Existed_Raid", 00:12:02.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.308 "strip_size_kb": 64, 00:12:02.308 "state": "configuring", 00:12:02.308 "raid_level": "raid0", 00:12:02.308 "superblock": false, 00:12:02.308 "num_base_bdevs": 3, 00:12:02.308 "num_base_bdevs_discovered": 1, 00:12:02.308 "num_base_bdevs_operational": 3, 00:12:02.308 "base_bdevs_list": [ 00:12:02.309 { 00:12:02.309 "name": "BaseBdev1", 00:12:02.309 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:02.309 "is_configured": true, 00:12:02.309 "data_offset": 0, 00:12:02.309 "data_size": 65536 00:12:02.309 }, 00:12:02.309 { 00:12:02.309 "name": "BaseBdev2", 00:12:02.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.309 "is_configured": false, 00:12:02.309 "data_offset": 0, 00:12:02.309 "data_size": 0 00:12:02.309 }, 00:12:02.309 { 00:12:02.309 "name": "BaseBdev3", 00:12:02.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.309 "is_configured": false, 00:12:02.309 "data_offset": 0, 00:12:02.309 "data_size": 0 00:12:02.309 } 00:12:02.309 ] 00:12:02.309 }' 00:12:02.309 11:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:02.309 11:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.877 [2024-11-20 11:24:10.458069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.877 BaseBdev2 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.877 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.878 11:24:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.878 [ 00:12:02.878 { 00:12:02.878 "name": "BaseBdev2", 00:12:02.878 "aliases": [ 00:12:02.878 "09ebcec3-7c94-405d-8ab2-0c60c5bc1663" 00:12:02.878 ], 00:12:02.878 "product_name": "Malloc disk", 00:12:02.878 "block_size": 512, 00:12:02.878 "num_blocks": 65536, 00:12:02.878 "uuid": "09ebcec3-7c94-405d-8ab2-0c60c5bc1663", 00:12:02.878 "assigned_rate_limits": { 00:12:02.878 "rw_ios_per_sec": 0, 00:12:02.878 "rw_mbytes_per_sec": 0, 00:12:02.878 "r_mbytes_per_sec": 0, 00:12:02.878 "w_mbytes_per_sec": 0 00:12:02.878 }, 00:12:02.878 "claimed": true, 00:12:02.878 "claim_type": "exclusive_write", 00:12:02.878 "zoned": false, 00:12:02.878 "supported_io_types": { 00:12:02.878 "read": true, 00:12:02.878 "write": true, 00:12:02.878 "unmap": true, 00:12:02.878 "flush": true, 00:12:02.878 "reset": true, 00:12:02.878 "nvme_admin": false, 00:12:02.878 "nvme_io": false, 00:12:02.878 "nvme_io_md": false, 00:12:02.878 "write_zeroes": true, 00:12:02.878 "zcopy": true, 00:12:02.878 "get_zone_info": false, 00:12:02.878 "zone_management": false, 00:12:02.878 "zone_append": false, 00:12:02.878 "compare": false, 00:12:02.878 "compare_and_write": false, 00:12:02.878 "abort": true, 00:12:02.878 "seek_hole": false, 00:12:02.878 "seek_data": false, 00:12:02.878 "copy": true, 00:12:02.878 "nvme_iov_md": false 00:12:02.878 }, 00:12:02.878 "memory_domains": [ 00:12:02.878 { 00:12:02.878 "dma_device_id": "system", 00:12:02.878 "dma_device_type": 1 00:12:02.878 }, 00:12:02.878 { 00:12:02.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.878 "dma_device_type": 2 00:12:02.878 } 00:12:02.878 ], 00:12:02.878 "driver_specific": {} 00:12:02.878 } 00:12:02.878 ] 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.878 11:24:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.878 "name": "Existed_Raid", 00:12:02.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.878 "strip_size_kb": 64, 00:12:02.878 "state": "configuring", 00:12:02.878 "raid_level": "raid0", 00:12:02.878 "superblock": false, 00:12:02.878 "num_base_bdevs": 3, 00:12:02.878 "num_base_bdevs_discovered": 2, 00:12:02.878 "num_base_bdevs_operational": 3, 00:12:02.878 "base_bdevs_list": [ 00:12:02.878 { 00:12:02.878 "name": "BaseBdev1", 00:12:02.878 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:02.878 "is_configured": true, 00:12:02.878 "data_offset": 0, 00:12:02.878 "data_size": 65536 00:12:02.878 }, 00:12:02.878 { 00:12:02.878 "name": "BaseBdev2", 00:12:02.878 "uuid": "09ebcec3-7c94-405d-8ab2-0c60c5bc1663", 00:12:02.878 "is_configured": true, 00:12:02.878 "data_offset": 0, 00:12:02.878 "data_size": 65536 00:12:02.878 }, 00:12:02.878 { 00:12:02.878 "name": "BaseBdev3", 00:12:02.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.878 "is_configured": false, 00:12:02.878 "data_offset": 0, 00:12:02.878 "data_size": 0 00:12:02.878 } 00:12:02.878 ] 00:12:02.878 }' 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.878 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.448 11:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.448 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.448 11:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.448 [2024-11-20 11:24:11.035968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.448 [2024-11-20 11:24:11.036023] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.448 [2024-11-20 11:24:11.036044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:03.448 [2024-11-20 11:24:11.036408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:03.448 [2024-11-20 11:24:11.036661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.448 [2024-11-20 11:24:11.036678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.448 [2024-11-20 11:24:11.037000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.448 BaseBdev3 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.448 
11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.448 [ 00:12:03.448 { 00:12:03.448 "name": "BaseBdev3", 00:12:03.448 "aliases": [ 00:12:03.448 "f1b11dc3-4285-4890-90c0-eb57b9a7fe13" 00:12:03.448 ], 00:12:03.448 "product_name": "Malloc disk", 00:12:03.448 "block_size": 512, 00:12:03.448 "num_blocks": 65536, 00:12:03.448 "uuid": "f1b11dc3-4285-4890-90c0-eb57b9a7fe13", 00:12:03.448 "assigned_rate_limits": { 00:12:03.448 "rw_ios_per_sec": 0, 00:12:03.448 "rw_mbytes_per_sec": 0, 00:12:03.448 "r_mbytes_per_sec": 0, 00:12:03.448 "w_mbytes_per_sec": 0 00:12:03.448 }, 00:12:03.448 "claimed": true, 00:12:03.448 "claim_type": "exclusive_write", 00:12:03.448 "zoned": false, 00:12:03.448 "supported_io_types": { 00:12:03.448 "read": true, 00:12:03.448 "write": true, 00:12:03.448 "unmap": true, 00:12:03.448 "flush": true, 00:12:03.448 "reset": true, 00:12:03.448 "nvme_admin": false, 00:12:03.448 "nvme_io": false, 00:12:03.448 "nvme_io_md": false, 00:12:03.448 "write_zeroes": true, 00:12:03.448 "zcopy": true, 00:12:03.448 "get_zone_info": false, 00:12:03.448 "zone_management": false, 00:12:03.448 "zone_append": false, 00:12:03.448 "compare": false, 00:12:03.448 "compare_and_write": false, 00:12:03.448 "abort": true, 00:12:03.448 "seek_hole": false, 00:12:03.448 "seek_data": false, 00:12:03.448 "copy": true, 00:12:03.448 "nvme_iov_md": false 00:12:03.448 }, 00:12:03.448 "memory_domains": [ 00:12:03.448 { 00:12:03.448 "dma_device_id": "system", 00:12:03.448 "dma_device_type": 1 00:12:03.448 }, 00:12:03.448 { 00:12:03.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.448 "dma_device_type": 2 00:12:03.448 } 00:12:03.448 ], 00:12:03.448 "driver_specific": {} 00:12:03.448 } 00:12:03.448 ] 
00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.448 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.449 "name": "Existed_Raid", 00:12:03.449 "uuid": "2ecb7e21-8fc1-417e-921e-80054f51a2d9", 00:12:03.449 "strip_size_kb": 64, 00:12:03.449 "state": "online", 00:12:03.449 "raid_level": "raid0", 00:12:03.449 "superblock": false, 00:12:03.449 "num_base_bdevs": 3, 00:12:03.449 "num_base_bdevs_discovered": 3, 00:12:03.449 "num_base_bdevs_operational": 3, 00:12:03.449 "base_bdevs_list": [ 00:12:03.449 { 00:12:03.449 "name": "BaseBdev1", 00:12:03.449 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:03.449 "is_configured": true, 00:12:03.449 "data_offset": 0, 00:12:03.449 "data_size": 65536 00:12:03.449 }, 00:12:03.449 { 00:12:03.449 "name": "BaseBdev2", 00:12:03.449 "uuid": "09ebcec3-7c94-405d-8ab2-0c60c5bc1663", 00:12:03.449 "is_configured": true, 00:12:03.449 "data_offset": 0, 00:12:03.449 "data_size": 65536 00:12:03.449 }, 00:12:03.449 { 00:12:03.449 "name": "BaseBdev3", 00:12:03.449 "uuid": "f1b11dc3-4285-4890-90c0-eb57b9a7fe13", 00:12:03.449 "is_configured": true, 00:12:03.449 "data_offset": 0, 00:12:03.449 "data_size": 65536 00:12:03.449 } 00:12:03.449 ] 00:12:03.449 }' 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.449 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 [2024-11-20 11:24:11.596758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:04.016 "name": "Existed_Raid", 00:12:04.016 "aliases": [ 00:12:04.016 "2ecb7e21-8fc1-417e-921e-80054f51a2d9" 00:12:04.016 ], 00:12:04.016 "product_name": "Raid Volume", 00:12:04.016 "block_size": 512, 00:12:04.016 "num_blocks": 196608, 00:12:04.016 "uuid": "2ecb7e21-8fc1-417e-921e-80054f51a2d9", 00:12:04.016 "assigned_rate_limits": { 00:12:04.016 "rw_ios_per_sec": 0, 00:12:04.016 "rw_mbytes_per_sec": 0, 00:12:04.016 "r_mbytes_per_sec": 0, 00:12:04.016 "w_mbytes_per_sec": 0 00:12:04.016 }, 00:12:04.016 "claimed": false, 00:12:04.016 "zoned": false, 00:12:04.016 "supported_io_types": { 00:12:04.016 "read": true, 00:12:04.016 "write": true, 00:12:04.016 "unmap": true, 00:12:04.016 "flush": true, 00:12:04.016 "reset": true, 00:12:04.016 "nvme_admin": false, 00:12:04.016 "nvme_io": false, 00:12:04.016 "nvme_io_md": false, 00:12:04.016 "write_zeroes": true, 00:12:04.016 "zcopy": false, 00:12:04.016 "get_zone_info": false, 00:12:04.016 "zone_management": false, 00:12:04.016 
"zone_append": false, 00:12:04.016 "compare": false, 00:12:04.016 "compare_and_write": false, 00:12:04.016 "abort": false, 00:12:04.016 "seek_hole": false, 00:12:04.016 "seek_data": false, 00:12:04.016 "copy": false, 00:12:04.016 "nvme_iov_md": false 00:12:04.016 }, 00:12:04.016 "memory_domains": [ 00:12:04.016 { 00:12:04.016 "dma_device_id": "system", 00:12:04.016 "dma_device_type": 1 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.016 "dma_device_type": 2 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "dma_device_id": "system", 00:12:04.016 "dma_device_type": 1 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.016 "dma_device_type": 2 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "dma_device_id": "system", 00:12:04.016 "dma_device_type": 1 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.016 "dma_device_type": 2 00:12:04.016 } 00:12:04.016 ], 00:12:04.016 "driver_specific": { 00:12:04.016 "raid": { 00:12:04.016 "uuid": "2ecb7e21-8fc1-417e-921e-80054f51a2d9", 00:12:04.016 "strip_size_kb": 64, 00:12:04.016 "state": "online", 00:12:04.016 "raid_level": "raid0", 00:12:04.016 "superblock": false, 00:12:04.016 "num_base_bdevs": 3, 00:12:04.016 "num_base_bdevs_discovered": 3, 00:12:04.016 "num_base_bdevs_operational": 3, 00:12:04.016 "base_bdevs_list": [ 00:12:04.016 { 00:12:04.016 "name": "BaseBdev1", 00:12:04.016 "uuid": "3c54fe16-27ad-426a-b6fb-a7f1ce8a39bc", 00:12:04.016 "is_configured": true, 00:12:04.016 "data_offset": 0, 00:12:04.016 "data_size": 65536 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "name": "BaseBdev2", 00:12:04.016 "uuid": "09ebcec3-7c94-405d-8ab2-0c60c5bc1663", 00:12:04.016 "is_configured": true, 00:12:04.016 "data_offset": 0, 00:12:04.016 "data_size": 65536 00:12:04.016 }, 00:12:04.016 { 00:12:04.016 "name": "BaseBdev3", 00:12:04.016 "uuid": "f1b11dc3-4285-4890-90c0-eb57b9a7fe13", 00:12:04.016 "is_configured": true, 
00:12:04.016 "data_offset": 0, 00:12:04.016 "data_size": 65536 00:12:04.016 } 00:12:04.016 ] 00:12:04.016 } 00:12:04.016 } 00:12:04.016 }' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:04.016 BaseBdev2 00:12:04.016 BaseBdev3' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.016 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.017 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.017 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.277 [2024-11-20 11:24:11.904441] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.277 [2024-11-20 11:24:11.904474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.277 [2024-11-20 11:24:11.904540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.277 11:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.277 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.277 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.277 "name": "Existed_Raid", 00:12:04.277 "uuid": "2ecb7e21-8fc1-417e-921e-80054f51a2d9", 00:12:04.277 "strip_size_kb": 64, 00:12:04.277 "state": "offline", 00:12:04.277 "raid_level": "raid0", 00:12:04.277 "superblock": false, 00:12:04.277 "num_base_bdevs": 3, 00:12:04.277 "num_base_bdevs_discovered": 2, 00:12:04.277 "num_base_bdevs_operational": 2, 00:12:04.277 "base_bdevs_list": [ 00:12:04.277 { 00:12:04.277 "name": null, 00:12:04.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.277 "is_configured": false, 00:12:04.277 "data_offset": 0, 00:12:04.277 "data_size": 65536 00:12:04.277 }, 00:12:04.277 { 00:12:04.277 "name": "BaseBdev2", 00:12:04.277 "uuid": "09ebcec3-7c94-405d-8ab2-0c60c5bc1663", 00:12:04.277 "is_configured": true, 00:12:04.277 "data_offset": 0, 00:12:04.277 "data_size": 65536 00:12:04.277 }, 00:12:04.277 { 00:12:04.277 "name": "BaseBdev3", 00:12:04.277 "uuid": "f1b11dc3-4285-4890-90c0-eb57b9a7fe13", 00:12:04.277 "is_configured": true, 00:12:04.277 "data_offset": 0, 00:12:04.277 "data_size": 65536 00:12:04.277 } 00:12:04.277 ] 00:12:04.277 }' 00:12:04.277 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.277 11:24:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.851 [2024-11-20 11:24:12.568867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.851 11:24:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.851 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.111 [2024-11-20 11:24:12.701407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.111 [2024-11-20 11:24:12.701605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.111 BaseBdev2 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.111 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.112 [ 00:12:05.112 { 00:12:05.112 "name": "BaseBdev2", 00:12:05.112 "aliases": [ 00:12:05.112 "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c" 00:12:05.112 ], 00:12:05.112 "product_name": "Malloc disk", 00:12:05.112 "block_size": 512, 00:12:05.112 "num_blocks": 65536, 00:12:05.112 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:05.112 "assigned_rate_limits": { 00:12:05.112 "rw_ios_per_sec": 0, 00:12:05.112 "rw_mbytes_per_sec": 0, 00:12:05.112 "r_mbytes_per_sec": 0, 00:12:05.112 "w_mbytes_per_sec": 0 00:12:05.112 }, 00:12:05.112 "claimed": false, 00:12:05.112 "zoned": false, 00:12:05.112 "supported_io_types": { 00:12:05.112 "read": true, 00:12:05.112 "write": true, 00:12:05.112 "unmap": true, 00:12:05.112 "flush": true, 00:12:05.112 "reset": true, 00:12:05.112 "nvme_admin": false, 00:12:05.112 "nvme_io": false, 00:12:05.112 "nvme_io_md": false, 00:12:05.112 "write_zeroes": true, 00:12:05.112 "zcopy": true, 00:12:05.112 "get_zone_info": false, 00:12:05.112 "zone_management": false, 00:12:05.112 "zone_append": false, 00:12:05.112 "compare": false, 00:12:05.112 "compare_and_write": false, 00:12:05.112 "abort": true, 00:12:05.112 "seek_hole": false, 00:12:05.112 "seek_data": false, 00:12:05.112 "copy": true, 00:12:05.112 "nvme_iov_md": false 00:12:05.112 }, 00:12:05.112 "memory_domains": [ 00:12:05.112 { 00:12:05.112 "dma_device_id": "system", 00:12:05.112 "dma_device_type": 1 00:12:05.112 }, 
00:12:05.112 { 00:12:05.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.112 "dma_device_type": 2 00:12:05.112 } 00:12:05.112 ], 00:12:05.112 "driver_specific": {} 00:12:05.112 } 00:12:05.112 ] 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.112 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.371 BaseBdev3 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 [ 00:12:05.372 { 00:12:05.372 "name": "BaseBdev3", 00:12:05.372 "aliases": [ 00:12:05.372 "bc09d78b-9aad-4b56-ad3a-54384ddb0444" 00:12:05.372 ], 00:12:05.372 "product_name": "Malloc disk", 00:12:05.372 "block_size": 512, 00:12:05.372 "num_blocks": 65536, 00:12:05.372 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:05.372 "assigned_rate_limits": { 00:12:05.372 "rw_ios_per_sec": 0, 00:12:05.372 "rw_mbytes_per_sec": 0, 00:12:05.372 "r_mbytes_per_sec": 0, 00:12:05.372 "w_mbytes_per_sec": 0 00:12:05.372 }, 00:12:05.372 "claimed": false, 00:12:05.372 "zoned": false, 00:12:05.372 "supported_io_types": { 00:12:05.372 "read": true, 00:12:05.372 "write": true, 00:12:05.372 "unmap": true, 00:12:05.372 "flush": true, 00:12:05.372 "reset": true, 00:12:05.372 "nvme_admin": false, 00:12:05.372 "nvme_io": false, 00:12:05.372 "nvme_io_md": false, 00:12:05.372 "write_zeroes": true, 00:12:05.372 "zcopy": true, 00:12:05.372 "get_zone_info": false, 00:12:05.372 "zone_management": false, 00:12:05.372 "zone_append": false, 00:12:05.372 "compare": false, 00:12:05.372 "compare_and_write": false, 00:12:05.372 "abort": true, 00:12:05.372 "seek_hole": false, 00:12:05.372 "seek_data": false, 00:12:05.372 "copy": true, 00:12:05.372 "nvme_iov_md": false 00:12:05.372 }, 00:12:05.372 "memory_domains": [ 00:12:05.372 { 00:12:05.372 "dma_device_id": "system", 00:12:05.372 "dma_device_type": 1 00:12:05.372 }, 00:12:05.372 { 
00:12:05.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.372 "dma_device_type": 2 00:12:05.372 } 00:12:05.372 ], 00:12:05.372 "driver_specific": {} 00:12:05.372 } 00:12:05.372 ] 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.372 11:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 [2024-11-20 11:24:12.997900] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.372 [2024-11-20 11:24:12.997953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.372 [2024-11-20 11:24:12.997984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.372 [2024-11-20 11:24:13.000306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.372 "name": "Existed_Raid", 00:12:05.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.372 "strip_size_kb": 64, 00:12:05.372 "state": "configuring", 00:12:05.372 "raid_level": "raid0", 00:12:05.372 "superblock": false, 00:12:05.372 "num_base_bdevs": 3, 00:12:05.372 "num_base_bdevs_discovered": 2, 00:12:05.372 "num_base_bdevs_operational": 3, 00:12:05.372 "base_bdevs_list": [ 00:12:05.372 { 00:12:05.372 "name": "BaseBdev1", 00:12:05.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.372 
"is_configured": false, 00:12:05.372 "data_offset": 0, 00:12:05.372 "data_size": 0 00:12:05.372 }, 00:12:05.372 { 00:12:05.372 "name": "BaseBdev2", 00:12:05.372 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:05.372 "is_configured": true, 00:12:05.372 "data_offset": 0, 00:12:05.372 "data_size": 65536 00:12:05.372 }, 00:12:05.372 { 00:12:05.372 "name": "BaseBdev3", 00:12:05.372 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:05.372 "is_configured": true, 00:12:05.372 "data_offset": 0, 00:12:05.372 "data_size": 65536 00:12:05.372 } 00:12:05.372 ] 00:12:05.372 }' 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.372 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.941 [2024-11-20 11:24:13.502112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.941 11:24:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.941 "name": "Existed_Raid", 00:12:05.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.941 "strip_size_kb": 64, 00:12:05.941 "state": "configuring", 00:12:05.941 "raid_level": "raid0", 00:12:05.941 "superblock": false, 00:12:05.941 "num_base_bdevs": 3, 00:12:05.941 "num_base_bdevs_discovered": 1, 00:12:05.941 "num_base_bdevs_operational": 3, 00:12:05.941 "base_bdevs_list": [ 00:12:05.941 { 00:12:05.941 "name": "BaseBdev1", 00:12:05.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.941 "is_configured": false, 00:12:05.941 "data_offset": 0, 00:12:05.941 "data_size": 0 00:12:05.941 }, 00:12:05.941 { 00:12:05.941 "name": null, 00:12:05.941 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:05.941 "is_configured": false, 00:12:05.941 "data_offset": 0, 
00:12:05.941 "data_size": 65536 00:12:05.941 }, 00:12:05.941 { 00:12:05.941 "name": "BaseBdev3", 00:12:05.941 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:05.941 "is_configured": true, 00:12:05.941 "data_offset": 0, 00:12:05.941 "data_size": 65536 00:12:05.941 } 00:12:05.941 ] 00:12:05.941 }' 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.941 11:24:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.200 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.200 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.200 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.200 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.200 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.459 [2024-11-20 11:24:14.092634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.459 BaseBdev1 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.459 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.460 [ 00:12:06.460 { 00:12:06.460 "name": "BaseBdev1", 00:12:06.460 "aliases": [ 00:12:06.460 "b5458d89-5df8-4d82-96b9-dae961244e35" 00:12:06.460 ], 00:12:06.460 "product_name": "Malloc disk", 00:12:06.460 "block_size": 512, 00:12:06.460 "num_blocks": 65536, 00:12:06.460 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:06.460 "assigned_rate_limits": { 00:12:06.460 "rw_ios_per_sec": 0, 00:12:06.460 "rw_mbytes_per_sec": 0, 00:12:06.460 "r_mbytes_per_sec": 0, 00:12:06.460 "w_mbytes_per_sec": 0 00:12:06.460 }, 00:12:06.460 "claimed": true, 00:12:06.460 "claim_type": "exclusive_write", 00:12:06.460 "zoned": false, 00:12:06.460 "supported_io_types": { 00:12:06.460 "read": true, 00:12:06.460 "write": true, 00:12:06.460 "unmap": 
true, 00:12:06.460 "flush": true, 00:12:06.460 "reset": true, 00:12:06.460 "nvme_admin": false, 00:12:06.460 "nvme_io": false, 00:12:06.460 "nvme_io_md": false, 00:12:06.460 "write_zeroes": true, 00:12:06.460 "zcopy": true, 00:12:06.460 "get_zone_info": false, 00:12:06.460 "zone_management": false, 00:12:06.460 "zone_append": false, 00:12:06.460 "compare": false, 00:12:06.460 "compare_and_write": false, 00:12:06.460 "abort": true, 00:12:06.460 "seek_hole": false, 00:12:06.460 "seek_data": false, 00:12:06.460 "copy": true, 00:12:06.460 "nvme_iov_md": false 00:12:06.460 }, 00:12:06.460 "memory_domains": [ 00:12:06.460 { 00:12:06.460 "dma_device_id": "system", 00:12:06.460 "dma_device_type": 1 00:12:06.460 }, 00:12:06.460 { 00:12:06.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.460 "dma_device_type": 2 00:12:06.460 } 00:12:06.460 ], 00:12:06.460 "driver_specific": {} 00:12:06.460 } 00:12:06.460 ] 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.460 11:24:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.460 "name": "Existed_Raid", 00:12:06.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.460 "strip_size_kb": 64, 00:12:06.460 "state": "configuring", 00:12:06.460 "raid_level": "raid0", 00:12:06.460 "superblock": false, 00:12:06.460 "num_base_bdevs": 3, 00:12:06.460 "num_base_bdevs_discovered": 2, 00:12:06.460 "num_base_bdevs_operational": 3, 00:12:06.460 "base_bdevs_list": [ 00:12:06.460 { 00:12:06.460 "name": "BaseBdev1", 00:12:06.460 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:06.460 "is_configured": true, 00:12:06.460 "data_offset": 0, 00:12:06.460 "data_size": 65536 00:12:06.460 }, 00:12:06.460 { 00:12:06.460 "name": null, 00:12:06.460 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:06.460 "is_configured": false, 00:12:06.460 "data_offset": 0, 00:12:06.460 "data_size": 65536 00:12:06.460 }, 00:12:06.460 { 00:12:06.460 "name": "BaseBdev3", 00:12:06.460 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:06.460 "is_configured": true, 00:12:06.460 "data_offset": 0, 
00:12:06.460 "data_size": 65536 00:12:06.460 } 00:12:06.460 ] 00:12:06.460 }' 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.460 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.027 [2024-11-20 11:24:14.688830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.027 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.028 "name": "Existed_Raid", 00:12:07.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.028 "strip_size_kb": 64, 00:12:07.028 "state": "configuring", 00:12:07.028 "raid_level": "raid0", 00:12:07.028 "superblock": false, 00:12:07.028 "num_base_bdevs": 3, 00:12:07.028 "num_base_bdevs_discovered": 1, 00:12:07.028 "num_base_bdevs_operational": 3, 00:12:07.028 "base_bdevs_list": [ 00:12:07.028 { 00:12:07.028 "name": "BaseBdev1", 00:12:07.028 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:07.028 "is_configured": true, 00:12:07.028 "data_offset": 0, 00:12:07.028 "data_size": 65536 00:12:07.028 }, 00:12:07.028 { 
00:12:07.028 "name": null, 00:12:07.028 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:07.028 "is_configured": false, 00:12:07.028 "data_offset": 0, 00:12:07.028 "data_size": 65536 00:12:07.028 }, 00:12:07.028 { 00:12:07.028 "name": null, 00:12:07.028 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:07.028 "is_configured": false, 00:12:07.028 "data_offset": 0, 00:12:07.028 "data_size": 65536 00:12:07.028 } 00:12:07.028 ] 00:12:07.028 }' 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.028 11:24:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.595 [2024-11-20 11:24:15.269030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.595 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.596 "name": "Existed_Raid", 00:12:07.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.596 "strip_size_kb": 64, 00:12:07.596 "state": "configuring", 00:12:07.596 "raid_level": "raid0", 00:12:07.596 
"superblock": false, 00:12:07.596 "num_base_bdevs": 3, 00:12:07.596 "num_base_bdevs_discovered": 2, 00:12:07.596 "num_base_bdevs_operational": 3, 00:12:07.596 "base_bdevs_list": [ 00:12:07.596 { 00:12:07.596 "name": "BaseBdev1", 00:12:07.596 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:07.596 "is_configured": true, 00:12:07.596 "data_offset": 0, 00:12:07.596 "data_size": 65536 00:12:07.596 }, 00:12:07.596 { 00:12:07.596 "name": null, 00:12:07.596 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:07.596 "is_configured": false, 00:12:07.596 "data_offset": 0, 00:12:07.596 "data_size": 65536 00:12:07.596 }, 00:12:07.596 { 00:12:07.596 "name": "BaseBdev3", 00:12:07.596 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:07.596 "is_configured": true, 00:12:07.596 "data_offset": 0, 00:12:07.596 "data_size": 65536 00:12:07.596 } 00:12:07.596 ] 00:12:07.596 }' 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.596 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.163 [2024-11-20 11:24:15.861176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.163 11:24:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.163 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.163 "name": "Existed_Raid", 00:12:08.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.163 "strip_size_kb": 64, 00:12:08.163 "state": "configuring", 00:12:08.163 "raid_level": "raid0", 00:12:08.163 "superblock": false, 00:12:08.163 "num_base_bdevs": 3, 00:12:08.163 "num_base_bdevs_discovered": 1, 00:12:08.163 "num_base_bdevs_operational": 3, 00:12:08.163 "base_bdevs_list": [ 00:12:08.163 { 00:12:08.163 "name": null, 00:12:08.163 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:08.163 "is_configured": false, 00:12:08.163 "data_offset": 0, 00:12:08.163 "data_size": 65536 00:12:08.163 }, 00:12:08.163 { 00:12:08.163 "name": null, 00:12:08.163 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:08.163 "is_configured": false, 00:12:08.163 "data_offset": 0, 00:12:08.163 "data_size": 65536 00:12:08.163 }, 00:12:08.163 { 00:12:08.163 "name": "BaseBdev3", 00:12:08.163 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:08.163 "is_configured": true, 00:12:08.163 "data_offset": 0, 00:12:08.163 "data_size": 65536 00:12:08.163 } 00:12:08.163 ] 00:12:08.163 }' 00:12:08.163 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.163 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 [2024-11-20 11:24:16.522963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.731 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.990 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.990 "name": "Existed_Raid", 00:12:08.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.990 "strip_size_kb": 64, 00:12:08.990 "state": "configuring", 00:12:08.990 "raid_level": "raid0", 00:12:08.990 "superblock": false, 00:12:08.990 "num_base_bdevs": 3, 00:12:08.990 "num_base_bdevs_discovered": 2, 00:12:08.990 "num_base_bdevs_operational": 3, 00:12:08.990 "base_bdevs_list": [ 00:12:08.990 { 00:12:08.990 "name": null, 00:12:08.990 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:08.990 "is_configured": false, 00:12:08.990 "data_offset": 0, 00:12:08.990 "data_size": 65536 00:12:08.990 }, 00:12:08.990 { 00:12:08.990 "name": "BaseBdev2", 00:12:08.990 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:08.990 "is_configured": true, 00:12:08.990 "data_offset": 0, 00:12:08.990 "data_size": 65536 00:12:08.990 }, 00:12:08.990 { 00:12:08.990 "name": "BaseBdev3", 00:12:08.990 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:08.990 "is_configured": true, 00:12:08.990 "data_offset": 0, 00:12:08.990 "data_size": 65536 00:12:08.990 } 00:12:08.990 ] 00:12:08.990 }' 00:12:08.990 11:24:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.990 11:24:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.249 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.249 11:24:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.249 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.249 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.249 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b5458d89-5df8-4d82-96b9-dae961244e35 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.508 [2024-11-20 11:24:17.196816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.508 [2024-11-20 11:24:17.196869] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.508 [2024-11-20 11:24:17.196884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:09.508 [2024-11-20 11:24:17.197195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:09.508 [2024-11-20 11:24:17.197383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.508 [2024-11-20 11:24:17.197399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.508 [2024-11-20 11:24:17.197741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.508 NewBaseBdev 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:09.508 [ 00:12:09.508 { 00:12:09.508 "name": "NewBaseBdev", 00:12:09.508 "aliases": [ 00:12:09.508 "b5458d89-5df8-4d82-96b9-dae961244e35" 00:12:09.508 ], 00:12:09.508 "product_name": "Malloc disk", 00:12:09.508 "block_size": 512, 00:12:09.508 "num_blocks": 65536, 00:12:09.508 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:09.508 "assigned_rate_limits": { 00:12:09.508 "rw_ios_per_sec": 0, 00:12:09.508 "rw_mbytes_per_sec": 0, 00:12:09.508 "r_mbytes_per_sec": 0, 00:12:09.508 "w_mbytes_per_sec": 0 00:12:09.508 }, 00:12:09.508 "claimed": true, 00:12:09.508 "claim_type": "exclusive_write", 00:12:09.508 "zoned": false, 00:12:09.508 "supported_io_types": { 00:12:09.508 "read": true, 00:12:09.508 "write": true, 00:12:09.508 "unmap": true, 00:12:09.508 "flush": true, 00:12:09.508 "reset": true, 00:12:09.508 "nvme_admin": false, 00:12:09.508 "nvme_io": false, 00:12:09.508 "nvme_io_md": false, 00:12:09.508 "write_zeroes": true, 00:12:09.508 "zcopy": true, 00:12:09.508 "get_zone_info": false, 00:12:09.508 "zone_management": false, 00:12:09.508 "zone_append": false, 00:12:09.508 "compare": false, 00:12:09.508 "compare_and_write": false, 00:12:09.508 "abort": true, 00:12:09.508 "seek_hole": false, 00:12:09.508 "seek_data": false, 00:12:09.508 "copy": true, 00:12:09.508 "nvme_iov_md": false 00:12:09.508 }, 00:12:09.508 "memory_domains": [ 00:12:09.508 { 00:12:09.508 "dma_device_id": "system", 00:12:09.508 "dma_device_type": 1 00:12:09.508 }, 00:12:09.508 { 00:12:09.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.508 "dma_device_type": 2 00:12:09.508 } 00:12:09.508 ], 00:12:09.508 "driver_specific": {} 00:12:09.508 } 00:12:09.508 ] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.508 "name": "Existed_Raid", 00:12:09.508 "uuid": "24f316eb-fb48-4ccd-adde-e8a1dad29a5e", 00:12:09.508 "strip_size_kb": 64, 00:12:09.508 "state": "online", 00:12:09.508 "raid_level": "raid0", 00:12:09.508 "superblock": false, 00:12:09.508 "num_base_bdevs": 3, 00:12:09.508 
"num_base_bdevs_discovered": 3, 00:12:09.508 "num_base_bdevs_operational": 3, 00:12:09.508 "base_bdevs_list": [ 00:12:09.508 { 00:12:09.508 "name": "NewBaseBdev", 00:12:09.508 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:09.508 "is_configured": true, 00:12:09.508 "data_offset": 0, 00:12:09.508 "data_size": 65536 00:12:09.508 }, 00:12:09.508 { 00:12:09.508 "name": "BaseBdev2", 00:12:09.508 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:09.508 "is_configured": true, 00:12:09.508 "data_offset": 0, 00:12:09.508 "data_size": 65536 00:12:09.508 }, 00:12:09.508 { 00:12:09.508 "name": "BaseBdev3", 00:12:09.508 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:09.508 "is_configured": true, 00:12:09.508 "data_offset": 0, 00:12:09.508 "data_size": 65536 00:12:09.508 } 00:12:09.508 ] 00:12:09.508 }' 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.508 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.073 [2024-11-20 11:24:17.741392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.073 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.073 "name": "Existed_Raid", 00:12:10.073 "aliases": [ 00:12:10.073 "24f316eb-fb48-4ccd-adde-e8a1dad29a5e" 00:12:10.073 ], 00:12:10.073 "product_name": "Raid Volume", 00:12:10.073 "block_size": 512, 00:12:10.073 "num_blocks": 196608, 00:12:10.073 "uuid": "24f316eb-fb48-4ccd-adde-e8a1dad29a5e", 00:12:10.073 "assigned_rate_limits": { 00:12:10.073 "rw_ios_per_sec": 0, 00:12:10.073 "rw_mbytes_per_sec": 0, 00:12:10.073 "r_mbytes_per_sec": 0, 00:12:10.073 "w_mbytes_per_sec": 0 00:12:10.073 }, 00:12:10.073 "claimed": false, 00:12:10.073 "zoned": false, 00:12:10.073 "supported_io_types": { 00:12:10.073 "read": true, 00:12:10.073 "write": true, 00:12:10.073 "unmap": true, 00:12:10.073 "flush": true, 00:12:10.073 "reset": true, 00:12:10.073 "nvme_admin": false, 00:12:10.073 "nvme_io": false, 00:12:10.073 "nvme_io_md": false, 00:12:10.073 "write_zeroes": true, 00:12:10.073 "zcopy": false, 00:12:10.073 "get_zone_info": false, 00:12:10.073 "zone_management": false, 00:12:10.073 "zone_append": false, 00:12:10.073 "compare": false, 00:12:10.073 "compare_and_write": false, 00:12:10.073 "abort": false, 00:12:10.073 "seek_hole": false, 00:12:10.073 "seek_data": false, 00:12:10.073 "copy": false, 00:12:10.073 "nvme_iov_md": false 00:12:10.073 }, 00:12:10.073 "memory_domains": [ 00:12:10.073 { 00:12:10.073 "dma_device_id": "system", 00:12:10.073 "dma_device_type": 1 00:12:10.073 }, 00:12:10.073 { 00:12:10.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.073 "dma_device_type": 2 00:12:10.073 }, 
00:12:10.073 { 00:12:10.073 "dma_device_id": "system", 00:12:10.073 "dma_device_type": 1 00:12:10.073 }, 00:12:10.073 { 00:12:10.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.073 "dma_device_type": 2 00:12:10.073 }, 00:12:10.073 { 00:12:10.073 "dma_device_id": "system", 00:12:10.073 "dma_device_type": 1 00:12:10.073 }, 00:12:10.073 { 00:12:10.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.073 "dma_device_type": 2 00:12:10.073 } 00:12:10.073 ], 00:12:10.073 "driver_specific": { 00:12:10.073 "raid": { 00:12:10.073 "uuid": "24f316eb-fb48-4ccd-adde-e8a1dad29a5e", 00:12:10.073 "strip_size_kb": 64, 00:12:10.073 "state": "online", 00:12:10.073 "raid_level": "raid0", 00:12:10.073 "superblock": false, 00:12:10.073 "num_base_bdevs": 3, 00:12:10.073 "num_base_bdevs_discovered": 3, 00:12:10.073 "num_base_bdevs_operational": 3, 00:12:10.073 "base_bdevs_list": [ 00:12:10.073 { 00:12:10.073 "name": "NewBaseBdev", 00:12:10.073 "uuid": "b5458d89-5df8-4d82-96b9-dae961244e35", 00:12:10.073 "is_configured": true, 00:12:10.073 "data_offset": 0, 00:12:10.073 "data_size": 65536 00:12:10.073 }, 00:12:10.073 { 00:12:10.073 "name": "BaseBdev2", 00:12:10.073 "uuid": "dff8a451-8fbf-40e3-9c1a-e990b48e0b1c", 00:12:10.074 "is_configured": true, 00:12:10.074 "data_offset": 0, 00:12:10.074 "data_size": 65536 00:12:10.074 }, 00:12:10.074 { 00:12:10.074 "name": "BaseBdev3", 00:12:10.074 "uuid": "bc09d78b-9aad-4b56-ad3a-54384ddb0444", 00:12:10.074 "is_configured": true, 00:12:10.074 "data_offset": 0, 00:12:10.074 "data_size": 65536 00:12:10.074 } 00:12:10.074 ] 00:12:10.074 } 00:12:10.074 } 00:12:10.074 }' 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.074 BaseBdev2 00:12:10.074 BaseBdev3' 00:12:10.074 11:24:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.074 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.331 11:24:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.331 [2024-11-20 11:24:18.029078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.331 [2024-11-20 11:24:18.029111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.331 [2024-11-20 11:24:18.029208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.331 [2024-11-20 11:24:18.029280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.331 [2024-11-20 11:24:18.029311] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63739 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63739 ']' 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63739 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63739 00:12:10.331 killing process with pid 63739 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63739' 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63739 00:12:10.331 [2024-11-20 11:24:18.060898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.331 11:24:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63739 00:12:10.592 [2024-11-20 11:24:18.328032] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.523 ************************************ 00:12:11.523 END TEST raid_state_function_test 00:12:11.523 ************************************ 00:12:11.523 11:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.523 00:12:11.523 real 0m11.695s 
00:12:11.523 user 0m19.401s 00:12:11.523 sys 0m1.621s 00:12:11.523 11:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.523 11:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.781 11:24:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:11.781 11:24:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.781 11:24:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.781 11:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.781 ************************************ 00:12:11.781 START TEST raid_state_function_test_sb 00:12:11.781 ************************************ 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.781 Process raid pid: 64377 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64377 
00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64377' 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64377 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64377 ']' 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.781 11:24:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.781 [2024-11-20 11:24:19.522492] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:12:11.781 [2024-11-20 11:24:19.522659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.041 [2024-11-20 11:24:19.696448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.041 [2024-11-20 11:24:19.829939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.299 [2024-11-20 11:24:20.037053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.299 [2024-11-20 11:24:20.037098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.867 [2024-11-20 11:24:20.548940] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.867 [2024-11-20 11:24:20.549154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.867 [2024-11-20 11:24:20.549311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.867 [2024-11-20 11:24:20.549376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.867 [2024-11-20 11:24:20.549510] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:12:12.867 [2024-11-20 11:24:20.549567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.867 "name": "Existed_Raid", 00:12:12.867 "uuid": "145b330f-ccfb-42e4-bccb-f8265d84dc41", 00:12:12.867 "strip_size_kb": 64, 00:12:12.867 "state": "configuring", 00:12:12.867 "raid_level": "raid0", 00:12:12.867 "superblock": true, 00:12:12.867 "num_base_bdevs": 3, 00:12:12.867 "num_base_bdevs_discovered": 0, 00:12:12.867 "num_base_bdevs_operational": 3, 00:12:12.867 "base_bdevs_list": [ 00:12:12.867 { 00:12:12.867 "name": "BaseBdev1", 00:12:12.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.867 "is_configured": false, 00:12:12.867 "data_offset": 0, 00:12:12.867 "data_size": 0 00:12:12.867 }, 00:12:12.867 { 00:12:12.867 "name": "BaseBdev2", 00:12:12.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.867 "is_configured": false, 00:12:12.867 "data_offset": 0, 00:12:12.867 "data_size": 0 00:12:12.867 }, 00:12:12.867 { 00:12:12.867 "name": "BaseBdev3", 00:12:12.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.867 "is_configured": false, 00:12:12.867 "data_offset": 0, 00:12:12.867 "data_size": 0 00:12:12.867 } 00:12:12.867 ] 00:12:12.867 }' 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.867 11:24:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.436 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.436 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.436 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 [2024-11-20 11:24:21.081109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.437 [2024-11-20 11:24:21.081157] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 [2024-11-20 11:24:21.089007] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.437 [2024-11-20 11:24:21.089061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.437 [2024-11-20 11:24:21.089078] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.437 [2024-11-20 11:24:21.089095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.437 [2024-11-20 11:24:21.089105] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.437 [2024-11-20 11:24:21.089119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 [2024-11-20 11:24:21.135286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.437 BaseBdev1 
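The cycle traced above (delete the configuring raid, re-create it with `bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid`, then add base bdevs and re-check state with `bdev_raid_get_bdevs all | jq`) boils down to the check `verify_raid_bdev_state` performs on the RPC output. The sketch below is not the test's own shell code; it re-implements that check in Python against a record trimmed from the trace above, so the field names and expected values are exactly the ones the log shows.

```python
import json

# Sample `bdev_raid_get_bdevs all` record, trimmed from the trace above
# (state "configuring": the raid exists but no base bdevs do yet).
rpc_output = json.loads("""
[{"name": "Existed_Raid", "uuid": "145b330f-ccfb-42e4-bccb-f8265d84dc41",
  "strip_size_kb": 64, "state": "configuring", "raid_level": "raid0",
  "superblock": true, "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0, "num_base_bdevs_operational": 3}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    # Mirrors the helper's jq select + field comparisons.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(rpc_output, "Existed_Raid",
                              "configuring", "raid0", 64, 3)
print(info["num_base_bdevs_discovered"])  # 0: no base bdevs created yet
```

Each `bdev_malloc_create 32 512 -b BaseBdevN` in the trace bumps `num_base_bdevs_discovered` by one; when it reaches `num_base_bdevs` the state flips from `configuring` to `online`.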
00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 [ 00:12:13.437 { 00:12:13.437 "name": "BaseBdev1", 00:12:13.437 "aliases": [ 00:12:13.437 "d227825b-b55a-4a84-a199-486356db100b" 00:12:13.437 ], 00:12:13.437 "product_name": "Malloc disk", 00:12:13.437 "block_size": 512, 00:12:13.437 "num_blocks": 65536, 00:12:13.437 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:13.437 "assigned_rate_limits": { 00:12:13.437 
"rw_ios_per_sec": 0, 00:12:13.437 "rw_mbytes_per_sec": 0, 00:12:13.437 "r_mbytes_per_sec": 0, 00:12:13.437 "w_mbytes_per_sec": 0 00:12:13.437 }, 00:12:13.437 "claimed": true, 00:12:13.437 "claim_type": "exclusive_write", 00:12:13.437 "zoned": false, 00:12:13.437 "supported_io_types": { 00:12:13.437 "read": true, 00:12:13.437 "write": true, 00:12:13.437 "unmap": true, 00:12:13.437 "flush": true, 00:12:13.437 "reset": true, 00:12:13.437 "nvme_admin": false, 00:12:13.437 "nvme_io": false, 00:12:13.437 "nvme_io_md": false, 00:12:13.437 "write_zeroes": true, 00:12:13.437 "zcopy": true, 00:12:13.437 "get_zone_info": false, 00:12:13.437 "zone_management": false, 00:12:13.437 "zone_append": false, 00:12:13.437 "compare": false, 00:12:13.437 "compare_and_write": false, 00:12:13.437 "abort": true, 00:12:13.437 "seek_hole": false, 00:12:13.437 "seek_data": false, 00:12:13.437 "copy": true, 00:12:13.437 "nvme_iov_md": false 00:12:13.437 }, 00:12:13.437 "memory_domains": [ 00:12:13.437 { 00:12:13.437 "dma_device_id": "system", 00:12:13.437 "dma_device_type": 1 00:12:13.437 }, 00:12:13.437 { 00:12:13.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.437 "dma_device_type": 2 00:12:13.437 } 00:12:13.437 ], 00:12:13.437 "driver_specific": {} 00:12:13.437 } 00:12:13.437 ] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.437 "name": "Existed_Raid", 00:12:13.437 "uuid": "6b278886-3759-4ab0-9ab1-4ce4d6344b87", 00:12:13.437 "strip_size_kb": 64, 00:12:13.437 "state": "configuring", 00:12:13.437 "raid_level": "raid0", 00:12:13.437 "superblock": true, 00:12:13.437 "num_base_bdevs": 3, 00:12:13.437 "num_base_bdevs_discovered": 1, 00:12:13.437 "num_base_bdevs_operational": 3, 00:12:13.437 "base_bdevs_list": [ 00:12:13.437 { 00:12:13.437 "name": "BaseBdev1", 00:12:13.437 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:13.437 "is_configured": true, 00:12:13.437 "data_offset": 2048, 00:12:13.437 "data_size": 63488 
00:12:13.437 }, 00:12:13.437 { 00:12:13.437 "name": "BaseBdev2", 00:12:13.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.437 "is_configured": false, 00:12:13.437 "data_offset": 0, 00:12:13.437 "data_size": 0 00:12:13.437 }, 00:12:13.437 { 00:12:13.437 "name": "BaseBdev3", 00:12:13.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.437 "is_configured": false, 00:12:13.437 "data_offset": 0, 00:12:13.437 "data_size": 0 00:12:13.437 } 00:12:13.437 ] 00:12:13.437 }' 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.437 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.006 [2024-11-20 11:24:21.691528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.006 [2024-11-20 11:24:21.691590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.006 [2024-11-20 11:24:21.699552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.006 [2024-11-20 
11:24:21.702310] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.006 [2024-11-20 11:24:21.702378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.006 [2024-11-20 11:24:21.702395] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.006 [2024-11-20 11:24:21.702411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.006 "name": "Existed_Raid", 00:12:14.006 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:14.006 "strip_size_kb": 64, 00:12:14.006 "state": "configuring", 00:12:14.006 "raid_level": "raid0", 00:12:14.006 "superblock": true, 00:12:14.006 "num_base_bdevs": 3, 00:12:14.006 "num_base_bdevs_discovered": 1, 00:12:14.006 "num_base_bdevs_operational": 3, 00:12:14.006 "base_bdevs_list": [ 00:12:14.006 { 00:12:14.006 "name": "BaseBdev1", 00:12:14.006 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:14.006 "is_configured": true, 00:12:14.006 "data_offset": 2048, 00:12:14.006 "data_size": 63488 00:12:14.006 }, 00:12:14.006 { 00:12:14.006 "name": "BaseBdev2", 00:12:14.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.006 "is_configured": false, 00:12:14.006 "data_offset": 0, 00:12:14.006 "data_size": 0 00:12:14.006 }, 00:12:14.006 { 00:12:14.006 "name": "BaseBdev3", 00:12:14.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.006 "is_configured": false, 00:12:14.006 "data_offset": 0, 00:12:14.006 "data_size": 0 00:12:14.006 } 00:12:14.006 ] 00:12:14.006 }' 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.006 11:24:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.574 [2024-11-20 11:24:22.258140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.574 BaseBdev2 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.574 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.574 [ 00:12:14.574 { 00:12:14.574 "name": "BaseBdev2", 00:12:14.574 "aliases": [ 00:12:14.574 "2b5cb073-12b7-4de4-9959-398024d9bd64" 00:12:14.574 ], 00:12:14.574 "product_name": "Malloc disk", 00:12:14.574 "block_size": 512, 00:12:14.574 "num_blocks": 65536, 00:12:14.575 "uuid": "2b5cb073-12b7-4de4-9959-398024d9bd64", 00:12:14.575 "assigned_rate_limits": { 00:12:14.575 "rw_ios_per_sec": 0, 00:12:14.575 "rw_mbytes_per_sec": 0, 00:12:14.575 "r_mbytes_per_sec": 0, 00:12:14.575 "w_mbytes_per_sec": 0 00:12:14.575 }, 00:12:14.575 "claimed": true, 00:12:14.575 "claim_type": "exclusive_write", 00:12:14.575 "zoned": false, 00:12:14.575 "supported_io_types": { 00:12:14.575 "read": true, 00:12:14.575 "write": true, 00:12:14.575 "unmap": true, 00:12:14.575 "flush": true, 00:12:14.575 "reset": true, 00:12:14.575 "nvme_admin": false, 00:12:14.575 "nvme_io": false, 00:12:14.575 "nvme_io_md": false, 00:12:14.575 "write_zeroes": true, 00:12:14.575 "zcopy": true, 00:12:14.575 "get_zone_info": false, 00:12:14.575 "zone_management": false, 00:12:14.575 "zone_append": false, 00:12:14.575 "compare": false, 00:12:14.575 "compare_and_write": false, 00:12:14.575 "abort": true, 00:12:14.575 "seek_hole": false, 00:12:14.575 "seek_data": false, 00:12:14.575 "copy": true, 00:12:14.575 "nvme_iov_md": false 00:12:14.575 }, 00:12:14.575 "memory_domains": [ 00:12:14.575 { 00:12:14.575 "dma_device_id": "system", 00:12:14.575 "dma_device_type": 1 00:12:14.575 }, 00:12:14.575 { 00:12:14.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.575 "dma_device_type": 2 00:12:14.575 } 00:12:14.575 ], 00:12:14.575 "driver_specific": {} 00:12:14.575 } 00:12:14.575 ] 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.575 "name": "Existed_Raid", 00:12:14.575 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:14.575 "strip_size_kb": 64, 00:12:14.575 "state": "configuring", 00:12:14.575 "raid_level": "raid0", 00:12:14.575 "superblock": true, 00:12:14.575 "num_base_bdevs": 3, 00:12:14.575 "num_base_bdevs_discovered": 2, 00:12:14.575 "num_base_bdevs_operational": 3, 00:12:14.575 "base_bdevs_list": [ 00:12:14.575 { 00:12:14.575 "name": "BaseBdev1", 00:12:14.575 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:14.575 "is_configured": true, 00:12:14.575 "data_offset": 2048, 00:12:14.575 "data_size": 63488 00:12:14.575 }, 00:12:14.575 { 00:12:14.575 "name": "BaseBdev2", 00:12:14.575 "uuid": "2b5cb073-12b7-4de4-9959-398024d9bd64", 00:12:14.575 "is_configured": true, 00:12:14.575 "data_offset": 2048, 00:12:14.575 "data_size": 63488 00:12:14.575 }, 00:12:14.575 { 00:12:14.575 "name": "BaseBdev3", 00:12:14.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.575 "is_configured": false, 00:12:14.575 "data_offset": 0, 00:12:14.575 "data_size": 0 00:12:14.575 } 00:12:14.575 ] 00:12:14.575 }' 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.575 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.142 [2024-11-20 11:24:22.900722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.142 [2024-11-20 11:24:22.901029] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:15.142 [2024-11-20 11:24:22.901061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:15.142 BaseBdev3 00:12:15.142 [2024-11-20 11:24:22.901388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:15.142 [2024-11-20 11:24:22.901755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:15.142 [2024-11-20 11:24:22.901772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:15.142 [2024-11-20 11:24:22.901951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.142 [ 00:12:15.142 { 00:12:15.142 "name": "BaseBdev3", 00:12:15.142 "aliases": [ 00:12:15.142 "aed49a85-7c40-4a90-b2b1-7e8332c48a71" 00:12:15.142 ], 00:12:15.142 "product_name": "Malloc disk", 00:12:15.142 "block_size": 512, 00:12:15.142 "num_blocks": 65536, 00:12:15.142 "uuid": "aed49a85-7c40-4a90-b2b1-7e8332c48a71", 00:12:15.142 "assigned_rate_limits": { 00:12:15.142 "rw_ios_per_sec": 0, 00:12:15.142 "rw_mbytes_per_sec": 0, 00:12:15.142 "r_mbytes_per_sec": 0, 00:12:15.142 "w_mbytes_per_sec": 0 00:12:15.142 }, 00:12:15.142 "claimed": true, 00:12:15.142 "claim_type": "exclusive_write", 00:12:15.142 "zoned": false, 00:12:15.142 "supported_io_types": { 00:12:15.142 "read": true, 00:12:15.142 "write": true, 00:12:15.142 "unmap": true, 00:12:15.142 "flush": true, 00:12:15.142 "reset": true, 00:12:15.142 "nvme_admin": false, 00:12:15.142 "nvme_io": false, 00:12:15.142 "nvme_io_md": false, 00:12:15.142 "write_zeroes": true, 00:12:15.142 "zcopy": true, 00:12:15.142 "get_zone_info": false, 00:12:15.142 "zone_management": false, 00:12:15.142 "zone_append": false, 00:12:15.142 "compare": false, 00:12:15.142 "compare_and_write": false, 00:12:15.142 "abort": true, 00:12:15.142 "seek_hole": false, 00:12:15.142 "seek_data": false, 00:12:15.142 "copy": true, 00:12:15.142 "nvme_iov_md": false 00:12:15.142 }, 00:12:15.142 "memory_domains": [ 00:12:15.142 { 00:12:15.142 "dma_device_id": "system", 00:12:15.142 "dma_device_type": 1 00:12:15.142 }, 00:12:15.142 { 00:12:15.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.142 "dma_device_type": 2 00:12:15.142 } 00:12:15.142 ], 00:12:15.142 "driver_specific": 
{} 00:12:15.142 } 00:12:15.142 ] 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:15.142 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.143 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.402 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.402 "name": "Existed_Raid", 00:12:15.402 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:15.402 "strip_size_kb": 64, 00:12:15.402 "state": "online", 00:12:15.402 "raid_level": "raid0", 00:12:15.402 "superblock": true, 00:12:15.402 "num_base_bdevs": 3, 00:12:15.402 "num_base_bdevs_discovered": 3, 00:12:15.402 "num_base_bdevs_operational": 3, 00:12:15.402 "base_bdevs_list": [ 00:12:15.402 { 00:12:15.402 "name": "BaseBdev1", 00:12:15.402 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:15.402 "is_configured": true, 00:12:15.402 "data_offset": 2048, 00:12:15.402 "data_size": 63488 00:12:15.402 }, 00:12:15.402 { 00:12:15.402 "name": "BaseBdev2", 00:12:15.402 "uuid": "2b5cb073-12b7-4de4-9959-398024d9bd64", 00:12:15.402 "is_configured": true, 00:12:15.402 "data_offset": 2048, 00:12:15.402 "data_size": 63488 00:12:15.402 }, 00:12:15.402 { 00:12:15.402 "name": "BaseBdev3", 00:12:15.402 "uuid": "aed49a85-7c40-4a90-b2b1-7e8332c48a71", 00:12:15.402 "is_configured": true, 00:12:15.402 "data_offset": 2048, 00:12:15.402 "data_size": 63488 00:12:15.402 } 00:12:15.402 ] 00:12:15.402 }' 00:12:15.402 11:24:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.402 11:24:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.660 [2024-11-20 11:24:23.461313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.660 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.922 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.922 "name": "Existed_Raid", 00:12:15.922 "aliases": [ 00:12:15.922 "d2601040-fdb5-4c49-b6e7-0dd593248c4d" 00:12:15.922 ], 00:12:15.922 "product_name": "Raid Volume", 00:12:15.922 "block_size": 512, 00:12:15.922 "num_blocks": 190464, 00:12:15.922 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:15.922 "assigned_rate_limits": { 00:12:15.922 "rw_ios_per_sec": 0, 00:12:15.922 "rw_mbytes_per_sec": 0, 00:12:15.922 "r_mbytes_per_sec": 0, 00:12:15.922 "w_mbytes_per_sec": 0 00:12:15.922 }, 00:12:15.922 "claimed": false, 00:12:15.922 "zoned": false, 00:12:15.922 "supported_io_types": { 00:12:15.922 "read": true, 00:12:15.922 "write": true, 00:12:15.922 "unmap": true, 00:12:15.922 "flush": true, 00:12:15.922 "reset": true, 00:12:15.922 "nvme_admin": false, 00:12:15.922 "nvme_io": false, 00:12:15.922 "nvme_io_md": false, 00:12:15.922 
"write_zeroes": true, 00:12:15.922 "zcopy": false, 00:12:15.922 "get_zone_info": false, 00:12:15.922 "zone_management": false, 00:12:15.922 "zone_append": false, 00:12:15.922 "compare": false, 00:12:15.922 "compare_and_write": false, 00:12:15.922 "abort": false, 00:12:15.922 "seek_hole": false, 00:12:15.922 "seek_data": false, 00:12:15.922 "copy": false, 00:12:15.922 "nvme_iov_md": false 00:12:15.922 }, 00:12:15.922 "memory_domains": [ 00:12:15.922 { 00:12:15.922 "dma_device_id": "system", 00:12:15.922 "dma_device_type": 1 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.922 "dma_device_type": 2 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "dma_device_id": "system", 00:12:15.922 "dma_device_type": 1 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.922 "dma_device_type": 2 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "dma_device_id": "system", 00:12:15.922 "dma_device_type": 1 00:12:15.922 }, 00:12:15.922 { 00:12:15.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.922 "dma_device_type": 2 00:12:15.922 } 00:12:15.922 ], 00:12:15.922 "driver_specific": { 00:12:15.922 "raid": { 00:12:15.922 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:15.922 "strip_size_kb": 64, 00:12:15.922 "state": "online", 00:12:15.922 "raid_level": "raid0", 00:12:15.922 "superblock": true, 00:12:15.922 "num_base_bdevs": 3, 00:12:15.923 "num_base_bdevs_discovered": 3, 00:12:15.923 "num_base_bdevs_operational": 3, 00:12:15.923 "base_bdevs_list": [ 00:12:15.923 { 00:12:15.923 "name": "BaseBdev1", 00:12:15.923 "uuid": "d227825b-b55a-4a84-a199-486356db100b", 00:12:15.923 "is_configured": true, 00:12:15.923 "data_offset": 2048, 00:12:15.923 "data_size": 63488 00:12:15.923 }, 00:12:15.923 { 00:12:15.923 "name": "BaseBdev2", 00:12:15.923 "uuid": "2b5cb073-12b7-4de4-9959-398024d9bd64", 00:12:15.923 "is_configured": true, 00:12:15.923 "data_offset": 2048, 00:12:15.923 "data_size": 63488 00:12:15.923 }, 
00:12:15.923 { 00:12:15.923 "name": "BaseBdev3", 00:12:15.923 "uuid": "aed49a85-7c40-4a90-b2b1-7e8332c48a71", 00:12:15.923 "is_configured": true, 00:12:15.923 "data_offset": 2048, 00:12:15.923 "data_size": 63488 00:12:15.923 } 00:12:15.923 ] 00:12:15.923 } 00:12:15.923 } 00:12:15.923 }' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:15.923 BaseBdev2 00:12:15.923 BaseBdev3' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.923 
11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.923 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.924 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.924 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.924 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 [2024-11-20 11:24:23.785120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.183 [2024-11-20 11:24:23.785275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.183 [2024-11-20 11:24:23.785374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.183 "name": "Existed_Raid", 00:12:16.183 "uuid": "d2601040-fdb5-4c49-b6e7-0dd593248c4d", 00:12:16.183 "strip_size_kb": 64, 00:12:16.183 "state": "offline", 00:12:16.183 "raid_level": "raid0", 00:12:16.183 "superblock": true, 00:12:16.183 "num_base_bdevs": 3, 00:12:16.183 "num_base_bdevs_discovered": 2, 00:12:16.183 "num_base_bdevs_operational": 2, 00:12:16.183 "base_bdevs_list": [ 00:12:16.183 { 00:12:16.183 "name": null, 00:12:16.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.183 "is_configured": false, 00:12:16.183 "data_offset": 0, 00:12:16.183 "data_size": 63488 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "name": "BaseBdev2", 00:12:16.183 "uuid": "2b5cb073-12b7-4de4-9959-398024d9bd64", 00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 }, 00:12:16.183 { 00:12:16.183 "name": "BaseBdev3", 00:12:16.183 "uuid": "aed49a85-7c40-4a90-b2b1-7e8332c48a71", 
00:12:16.183 "is_configured": true, 00:12:16.183 "data_offset": 2048, 00:12:16.183 "data_size": 63488 00:12:16.183 } 00:12:16.183 ] 00:12:16.183 }' 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.183 11:24:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.825 [2024-11-20 11:24:24.453440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.825 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.825 [2024-11-20 11:24:24.587588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.825 [2024-11-20 11:24:24.587791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.084 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.084 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.084 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 BaseBdev2 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.085 11:24:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 [ 00:12:17.085 { 00:12:17.085 "name": "BaseBdev2", 00:12:17.085 "aliases": [ 00:12:17.085 "8634a37d-4d73-4be3-b1d9-40b7e9a716f7" 00:12:17.085 ], 00:12:17.085 "product_name": "Malloc disk", 00:12:17.085 "block_size": 512, 00:12:17.085 "num_blocks": 65536, 00:12:17.085 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:17.085 "assigned_rate_limits": { 00:12:17.085 "rw_ios_per_sec": 0, 00:12:17.085 "rw_mbytes_per_sec": 0, 00:12:17.085 "r_mbytes_per_sec": 0, 00:12:17.085 "w_mbytes_per_sec": 0 00:12:17.085 }, 00:12:17.085 "claimed": false, 00:12:17.085 "zoned": false, 00:12:17.085 "supported_io_types": { 00:12:17.085 "read": true, 00:12:17.085 "write": true, 00:12:17.085 "unmap": true, 00:12:17.085 "flush": true, 00:12:17.085 "reset": true, 00:12:17.085 "nvme_admin": false, 00:12:17.085 "nvme_io": false, 00:12:17.085 "nvme_io_md": false, 00:12:17.085 "write_zeroes": true, 00:12:17.085 "zcopy": true, 00:12:17.085 "get_zone_info": false, 00:12:17.085 
"zone_management": false, 00:12:17.085 "zone_append": false, 00:12:17.085 "compare": false, 00:12:17.085 "compare_and_write": false, 00:12:17.085 "abort": true, 00:12:17.085 "seek_hole": false, 00:12:17.085 "seek_data": false, 00:12:17.085 "copy": true, 00:12:17.085 "nvme_iov_md": false 00:12:17.085 }, 00:12:17.085 "memory_domains": [ 00:12:17.085 { 00:12:17.085 "dma_device_id": "system", 00:12:17.085 "dma_device_type": 1 00:12:17.085 }, 00:12:17.085 { 00:12:17.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.085 "dma_device_type": 2 00:12:17.085 } 00:12:17.085 ], 00:12:17.085 "driver_specific": {} 00:12:17.085 } 00:12:17.085 ] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 BaseBdev3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 [ 00:12:17.085 { 00:12:17.085 "name": "BaseBdev3", 00:12:17.085 "aliases": [ 00:12:17.085 "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb" 00:12:17.085 ], 00:12:17.085 "product_name": "Malloc disk", 00:12:17.085 "block_size": 512, 00:12:17.085 "num_blocks": 65536, 00:12:17.085 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:17.085 "assigned_rate_limits": { 00:12:17.085 "rw_ios_per_sec": 0, 00:12:17.085 "rw_mbytes_per_sec": 0, 00:12:17.085 "r_mbytes_per_sec": 0, 00:12:17.085 "w_mbytes_per_sec": 0 00:12:17.085 }, 00:12:17.085 "claimed": false, 00:12:17.085 "zoned": false, 00:12:17.085 "supported_io_types": { 00:12:17.085 "read": true, 00:12:17.085 "write": true, 00:12:17.085 "unmap": true, 00:12:17.085 "flush": true, 00:12:17.085 "reset": true, 00:12:17.085 "nvme_admin": false, 00:12:17.085 "nvme_io": false, 00:12:17.085 "nvme_io_md": false, 00:12:17.085 "write_zeroes": true, 00:12:17.085 
"zcopy": true, 00:12:17.085 "get_zone_info": false, 00:12:17.085 "zone_management": false, 00:12:17.085 "zone_append": false, 00:12:17.085 "compare": false, 00:12:17.085 "compare_and_write": false, 00:12:17.085 "abort": true, 00:12:17.085 "seek_hole": false, 00:12:17.085 "seek_data": false, 00:12:17.085 "copy": true, 00:12:17.085 "nvme_iov_md": false 00:12:17.085 }, 00:12:17.085 "memory_domains": [ 00:12:17.085 { 00:12:17.085 "dma_device_id": "system", 00:12:17.085 "dma_device_type": 1 00:12:17.085 }, 00:12:17.085 { 00:12:17.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.085 "dma_device_type": 2 00:12:17.085 } 00:12:17.085 ], 00:12:17.085 "driver_specific": {} 00:12:17.085 } 00:12:17.085 ] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.085 [2024-11-20 11:24:24.889198] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.085 [2024-11-20 11:24:24.889254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.085 [2024-11-20 11:24:24.889288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.085 [2024-11-20 11:24:24.891771] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.085 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.086 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.345 11:24:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.345 "name": "Existed_Raid", 00:12:17.345 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:17.345 "strip_size_kb": 64, 00:12:17.345 "state": "configuring", 00:12:17.345 "raid_level": "raid0", 00:12:17.345 "superblock": true, 00:12:17.345 "num_base_bdevs": 3, 00:12:17.345 "num_base_bdevs_discovered": 2, 00:12:17.345 "num_base_bdevs_operational": 3, 00:12:17.345 "base_bdevs_list": [ 00:12:17.345 { 00:12:17.345 "name": "BaseBdev1", 00:12:17.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.345 "is_configured": false, 00:12:17.345 "data_offset": 0, 00:12:17.345 "data_size": 0 00:12:17.345 }, 00:12:17.345 { 00:12:17.345 "name": "BaseBdev2", 00:12:17.345 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:17.345 "is_configured": true, 00:12:17.345 "data_offset": 2048, 00:12:17.345 "data_size": 63488 00:12:17.345 }, 00:12:17.345 { 00:12:17.345 "name": "BaseBdev3", 00:12:17.345 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:17.345 "is_configured": true, 00:12:17.345 "data_offset": 2048, 00:12:17.345 "data_size": 63488 00:12:17.345 } 00:12:17.345 ] 00:12:17.345 }' 00:12:17.345 11:24:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.345 11:24:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.604 [2024-11-20 11:24:25.417312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.604 11:24:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.604 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.863 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.863 "name": "Existed_Raid", 00:12:17.863 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:17.863 "strip_size_kb": 64, 
00:12:17.863 "state": "configuring", 00:12:17.863 "raid_level": "raid0", 00:12:17.863 "superblock": true, 00:12:17.863 "num_base_bdevs": 3, 00:12:17.863 "num_base_bdevs_discovered": 1, 00:12:17.863 "num_base_bdevs_operational": 3, 00:12:17.863 "base_bdevs_list": [ 00:12:17.863 { 00:12:17.863 "name": "BaseBdev1", 00:12:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.863 "is_configured": false, 00:12:17.863 "data_offset": 0, 00:12:17.863 "data_size": 0 00:12:17.863 }, 00:12:17.863 { 00:12:17.863 "name": null, 00:12:17.863 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:17.863 "is_configured": false, 00:12:17.863 "data_offset": 0, 00:12:17.863 "data_size": 63488 00:12:17.863 }, 00:12:17.863 { 00:12:17.863 "name": "BaseBdev3", 00:12:17.863 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:17.863 "is_configured": true, 00:12:17.863 "data_offset": 2048, 00:12:17.863 "data_size": 63488 00:12:17.863 } 00:12:17.863 ] 00:12:17.863 }' 00:12:17.863 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.863 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.122 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.122 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.122 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.122 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.122 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.380 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:18.380 11:24:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:12:18.380 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.380 11:24:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.380 [2024-11-20 11:24:26.011870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.380 BaseBdev1 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.380 
[ 00:12:18.380 { 00:12:18.380 "name": "BaseBdev1", 00:12:18.380 "aliases": [ 00:12:18.380 "50df253d-b9a8-42e7-8885-207b6398b3a2" 00:12:18.380 ], 00:12:18.380 "product_name": "Malloc disk", 00:12:18.380 "block_size": 512, 00:12:18.380 "num_blocks": 65536, 00:12:18.380 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:18.380 "assigned_rate_limits": { 00:12:18.380 "rw_ios_per_sec": 0, 00:12:18.380 "rw_mbytes_per_sec": 0, 00:12:18.380 "r_mbytes_per_sec": 0, 00:12:18.380 "w_mbytes_per_sec": 0 00:12:18.380 }, 00:12:18.380 "claimed": true, 00:12:18.380 "claim_type": "exclusive_write", 00:12:18.380 "zoned": false, 00:12:18.380 "supported_io_types": { 00:12:18.380 "read": true, 00:12:18.380 "write": true, 00:12:18.380 "unmap": true, 00:12:18.380 "flush": true, 00:12:18.380 "reset": true, 00:12:18.380 "nvme_admin": false, 00:12:18.380 "nvme_io": false, 00:12:18.380 "nvme_io_md": false, 00:12:18.380 "write_zeroes": true, 00:12:18.380 "zcopy": true, 00:12:18.380 "get_zone_info": false, 00:12:18.380 "zone_management": false, 00:12:18.380 "zone_append": false, 00:12:18.380 "compare": false, 00:12:18.380 "compare_and_write": false, 00:12:18.380 "abort": true, 00:12:18.380 "seek_hole": false, 00:12:18.380 "seek_data": false, 00:12:18.380 "copy": true, 00:12:18.380 "nvme_iov_md": false 00:12:18.380 }, 00:12:18.380 "memory_domains": [ 00:12:18.380 { 00:12:18.380 "dma_device_id": "system", 00:12:18.380 "dma_device_type": 1 00:12:18.380 }, 00:12:18.380 { 00:12:18.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.380 "dma_device_type": 2 00:12:18.380 } 00:12:18.380 ], 00:12:18.380 "driver_specific": {} 00:12:18.380 } 00:12:18.380 ] 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.380 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.381 "name": "Existed_Raid", 00:12:18.381 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:18.381 "strip_size_kb": 64, 00:12:18.381 "state": "configuring", 00:12:18.381 "raid_level": "raid0", 00:12:18.381 "superblock": true, 
00:12:18.381 "num_base_bdevs": 3, 00:12:18.381 "num_base_bdevs_discovered": 2, 00:12:18.381 "num_base_bdevs_operational": 3, 00:12:18.381 "base_bdevs_list": [ 00:12:18.381 { 00:12:18.381 "name": "BaseBdev1", 00:12:18.381 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:18.381 "is_configured": true, 00:12:18.381 "data_offset": 2048, 00:12:18.381 "data_size": 63488 00:12:18.381 }, 00:12:18.381 { 00:12:18.381 "name": null, 00:12:18.381 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:18.381 "is_configured": false, 00:12:18.381 "data_offset": 0, 00:12:18.381 "data_size": 63488 00:12:18.381 }, 00:12:18.381 { 00:12:18.381 "name": "BaseBdev3", 00:12:18.381 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:18.381 "is_configured": true, 00:12:18.381 "data_offset": 2048, 00:12:18.381 "data_size": 63488 00:12:18.381 } 00:12:18.381 ] 00:12:18.381 }' 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.381 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.948 [2024-11-20 11:24:26.616089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.948 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.949 "name": "Existed_Raid", 00:12:18.949 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:18.949 "strip_size_kb": 64, 00:12:18.949 "state": "configuring", 00:12:18.949 "raid_level": "raid0", 00:12:18.949 "superblock": true, 00:12:18.949 "num_base_bdevs": 3, 00:12:18.949 "num_base_bdevs_discovered": 1, 00:12:18.949 "num_base_bdevs_operational": 3, 00:12:18.949 "base_bdevs_list": [ 00:12:18.949 { 00:12:18.949 "name": "BaseBdev1", 00:12:18.949 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:18.949 "is_configured": true, 00:12:18.949 "data_offset": 2048, 00:12:18.949 "data_size": 63488 00:12:18.949 }, 00:12:18.949 { 00:12:18.949 "name": null, 00:12:18.949 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:18.949 "is_configured": false, 00:12:18.949 "data_offset": 0, 00:12:18.949 "data_size": 63488 00:12:18.949 }, 00:12:18.949 { 00:12:18.949 "name": null, 00:12:18.949 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:18.949 "is_configured": false, 00:12:18.949 "data_offset": 0, 00:12:18.949 "data_size": 63488 00:12:18.949 } 00:12:18.949 ] 00:12:18.949 }' 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.949 11:24:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.516 [2024-11-20 11:24:27.180266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.516 "name": "Existed_Raid", 00:12:19.516 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:19.516 "strip_size_kb": 64, 00:12:19.516 "state": "configuring", 00:12:19.516 "raid_level": "raid0", 00:12:19.516 "superblock": true, 00:12:19.516 "num_base_bdevs": 3, 00:12:19.516 "num_base_bdevs_discovered": 2, 00:12:19.516 "num_base_bdevs_operational": 3, 00:12:19.516 "base_bdevs_list": [ 00:12:19.516 { 00:12:19.516 "name": "BaseBdev1", 00:12:19.516 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:19.516 "is_configured": true, 00:12:19.516 "data_offset": 2048, 00:12:19.516 "data_size": 63488 00:12:19.516 }, 00:12:19.516 { 00:12:19.516 "name": null, 00:12:19.516 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:19.516 "is_configured": false, 00:12:19.516 "data_offset": 0, 00:12:19.516 "data_size": 63488 00:12:19.516 }, 00:12:19.516 { 00:12:19.516 "name": "BaseBdev3", 00:12:19.516 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:19.516 "is_configured": true, 00:12:19.516 "data_offset": 2048, 00:12:19.516 "data_size": 63488 00:12:19.516 } 00:12:19.516 ] 00:12:19.516 }' 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.516 11:24:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.084 [2024-11-20 11:24:27.784475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.084 "name": "Existed_Raid", 00:12:20.084 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:20.084 "strip_size_kb": 64, 00:12:20.084 "state": "configuring", 00:12:20.084 "raid_level": "raid0", 00:12:20.084 "superblock": true, 00:12:20.084 "num_base_bdevs": 3, 00:12:20.084 "num_base_bdevs_discovered": 1, 00:12:20.084 "num_base_bdevs_operational": 3, 00:12:20.084 "base_bdevs_list": [ 00:12:20.084 { 00:12:20.084 "name": null, 00:12:20.084 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:20.084 "is_configured": false, 00:12:20.084 "data_offset": 0, 00:12:20.084 "data_size": 63488 00:12:20.084 }, 00:12:20.084 { 00:12:20.084 "name": null, 00:12:20.084 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:20.084 "is_configured": false, 00:12:20.084 "data_offset": 0, 00:12:20.084 
"data_size": 63488 00:12:20.084 }, 00:12:20.084 { 00:12:20.084 "name": "BaseBdev3", 00:12:20.084 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:20.084 "is_configured": true, 00:12:20.084 "data_offset": 2048, 00:12:20.084 "data_size": 63488 00:12:20.084 } 00:12:20.084 ] 00:12:20.084 }' 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.084 11:24:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.652 [2024-11-20 11:24:28.461319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:20.652 11:24:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.652 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.910 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.910 "name": "Existed_Raid", 00:12:20.910 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:20.910 "strip_size_kb": 64, 00:12:20.910 "state": "configuring", 00:12:20.910 "raid_level": "raid0", 00:12:20.910 "superblock": true, 00:12:20.910 "num_base_bdevs": 3, 00:12:20.910 
"num_base_bdevs_discovered": 2, 00:12:20.910 "num_base_bdevs_operational": 3, 00:12:20.910 "base_bdevs_list": [ 00:12:20.910 { 00:12:20.910 "name": null, 00:12:20.910 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:20.910 "is_configured": false, 00:12:20.910 "data_offset": 0, 00:12:20.910 "data_size": 63488 00:12:20.910 }, 00:12:20.910 { 00:12:20.910 "name": "BaseBdev2", 00:12:20.910 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:20.910 "is_configured": true, 00:12:20.910 "data_offset": 2048, 00:12:20.910 "data_size": 63488 00:12:20.910 }, 00:12:20.910 { 00:12:20.910 "name": "BaseBdev3", 00:12:20.910 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:20.910 "is_configured": true, 00:12:20.910 "data_offset": 2048, 00:12:20.910 "data_size": 63488 00:12:20.910 } 00:12:20.910 ] 00:12:20.910 }' 00:12:20.910 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.910 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.169 11:24:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.169 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.169 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.169 11:24:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.428 11:24:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 50df253d-b9a8-42e7-8885-207b6398b3a2 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.428 [2024-11-20 11:24:29.115641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.428 [2024-11-20 11:24:29.116143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.428 [2024-11-20 11:24:29.116175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.428 NewBaseBdev 00:12:21.428 [2024-11-20 11:24:29.116515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:21.428 [2024-11-20 11:24:29.116714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.428 [2024-11-20 11:24:29.116731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.428 [2024-11-20 11:24:29.116912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.428 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.428 [ 00:12:21.428 { 00:12:21.429 "name": "NewBaseBdev", 00:12:21.429 "aliases": [ 00:12:21.429 "50df253d-b9a8-42e7-8885-207b6398b3a2" 00:12:21.429 ], 00:12:21.429 "product_name": "Malloc disk", 00:12:21.429 "block_size": 512, 00:12:21.429 "num_blocks": 65536, 00:12:21.429 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:21.429 "assigned_rate_limits": { 00:12:21.429 "rw_ios_per_sec": 0, 00:12:21.429 "rw_mbytes_per_sec": 0, 00:12:21.429 "r_mbytes_per_sec": 0, 00:12:21.429 "w_mbytes_per_sec": 0 00:12:21.429 }, 00:12:21.429 "claimed": true, 00:12:21.429 "claim_type": "exclusive_write", 00:12:21.429 "zoned": false, 00:12:21.429 "supported_io_types": { 00:12:21.429 "read": true, 00:12:21.429 "write": true, 
00:12:21.429 "unmap": true, 00:12:21.429 "flush": true, 00:12:21.429 "reset": true, 00:12:21.429 "nvme_admin": false, 00:12:21.429 "nvme_io": false, 00:12:21.429 "nvme_io_md": false, 00:12:21.429 "write_zeroes": true, 00:12:21.429 "zcopy": true, 00:12:21.429 "get_zone_info": false, 00:12:21.429 "zone_management": false, 00:12:21.429 "zone_append": false, 00:12:21.429 "compare": false, 00:12:21.429 "compare_and_write": false, 00:12:21.429 "abort": true, 00:12:21.429 "seek_hole": false, 00:12:21.429 "seek_data": false, 00:12:21.429 "copy": true, 00:12:21.429 "nvme_iov_md": false 00:12:21.429 }, 00:12:21.429 "memory_domains": [ 00:12:21.429 { 00:12:21.429 "dma_device_id": "system", 00:12:21.429 "dma_device_type": 1 00:12:21.429 }, 00:12:21.429 { 00:12:21.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.429 "dma_device_type": 2 00:12:21.429 } 00:12:21.429 ], 00:12:21.429 "driver_specific": {} 00:12:21.429 } 00:12:21.429 ] 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.429 "name": "Existed_Raid", 00:12:21.429 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:21.429 "strip_size_kb": 64, 00:12:21.429 "state": "online", 00:12:21.429 "raid_level": "raid0", 00:12:21.429 "superblock": true, 00:12:21.429 "num_base_bdevs": 3, 00:12:21.429 "num_base_bdevs_discovered": 3, 00:12:21.429 "num_base_bdevs_operational": 3, 00:12:21.429 "base_bdevs_list": [ 00:12:21.429 { 00:12:21.429 "name": "NewBaseBdev", 00:12:21.429 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:21.429 "is_configured": true, 00:12:21.429 "data_offset": 2048, 00:12:21.429 "data_size": 63488 00:12:21.429 }, 00:12:21.429 { 00:12:21.429 "name": "BaseBdev2", 00:12:21.429 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:21.429 "is_configured": true, 00:12:21.429 "data_offset": 2048, 00:12:21.429 "data_size": 63488 00:12:21.429 }, 00:12:21.429 { 00:12:21.429 "name": "BaseBdev3", 00:12:21.429 "uuid": 
"1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:21.429 "is_configured": true, 00:12:21.429 "data_offset": 2048, 00:12:21.429 "data_size": 63488 00:12:21.429 } 00:12:21.429 ] 00:12:21.429 }' 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.429 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.996 [2024-11-20 11:24:29.688201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.996 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.996 "name": "Existed_Raid", 00:12:21.996 "aliases": [ 00:12:21.996 "62bcab74-f41c-42b2-860b-1c08a7bd3bb3" 
00:12:21.996 ], 00:12:21.996 "product_name": "Raid Volume", 00:12:21.996 "block_size": 512, 00:12:21.996 "num_blocks": 190464, 00:12:21.996 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:21.996 "assigned_rate_limits": { 00:12:21.996 "rw_ios_per_sec": 0, 00:12:21.996 "rw_mbytes_per_sec": 0, 00:12:21.996 "r_mbytes_per_sec": 0, 00:12:21.997 "w_mbytes_per_sec": 0 00:12:21.997 }, 00:12:21.997 "claimed": false, 00:12:21.997 "zoned": false, 00:12:21.997 "supported_io_types": { 00:12:21.997 "read": true, 00:12:21.997 "write": true, 00:12:21.997 "unmap": true, 00:12:21.997 "flush": true, 00:12:21.997 "reset": true, 00:12:21.997 "nvme_admin": false, 00:12:21.997 "nvme_io": false, 00:12:21.997 "nvme_io_md": false, 00:12:21.997 "write_zeroes": true, 00:12:21.997 "zcopy": false, 00:12:21.997 "get_zone_info": false, 00:12:21.997 "zone_management": false, 00:12:21.997 "zone_append": false, 00:12:21.997 "compare": false, 00:12:21.997 "compare_and_write": false, 00:12:21.997 "abort": false, 00:12:21.997 "seek_hole": false, 00:12:21.997 "seek_data": false, 00:12:21.997 "copy": false, 00:12:21.997 "nvme_iov_md": false 00:12:21.997 }, 00:12:21.997 "memory_domains": [ 00:12:21.997 { 00:12:21.997 "dma_device_id": "system", 00:12:21.997 "dma_device_type": 1 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.997 "dma_device_type": 2 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "dma_device_id": "system", 00:12:21.997 "dma_device_type": 1 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.997 "dma_device_type": 2 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "dma_device_id": "system", 00:12:21.997 "dma_device_type": 1 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.997 "dma_device_type": 2 00:12:21.997 } 00:12:21.997 ], 00:12:21.997 "driver_specific": { 00:12:21.997 "raid": { 00:12:21.997 "uuid": "62bcab74-f41c-42b2-860b-1c08a7bd3bb3", 00:12:21.997 
"strip_size_kb": 64, 00:12:21.997 "state": "online", 00:12:21.997 "raid_level": "raid0", 00:12:21.997 "superblock": true, 00:12:21.997 "num_base_bdevs": 3, 00:12:21.997 "num_base_bdevs_discovered": 3, 00:12:21.997 "num_base_bdevs_operational": 3, 00:12:21.997 "base_bdevs_list": [ 00:12:21.997 { 00:12:21.997 "name": "NewBaseBdev", 00:12:21.997 "uuid": "50df253d-b9a8-42e7-8885-207b6398b3a2", 00:12:21.997 "is_configured": true, 00:12:21.997 "data_offset": 2048, 00:12:21.997 "data_size": 63488 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "name": "BaseBdev2", 00:12:21.997 "uuid": "8634a37d-4d73-4be3-b1d9-40b7e9a716f7", 00:12:21.997 "is_configured": true, 00:12:21.997 "data_offset": 2048, 00:12:21.997 "data_size": 63488 00:12:21.997 }, 00:12:21.997 { 00:12:21.997 "name": "BaseBdev3", 00:12:21.997 "uuid": "1e5fb810-5b50-4a71-bb8e-1bd85460c6cb", 00:12:21.997 "is_configured": true, 00:12:21.997 "data_offset": 2048, 00:12:21.997 "data_size": 63488 00:12:21.997 } 00:12:21.997 ] 00:12:21.997 } 00:12:21.997 } 00:12:21.997 }' 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:21.997 BaseBdev2 00:12:21.997 BaseBdev3' 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.997 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.255 11:24:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.255 [2024-11-20 11:24:30.000239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.255 [2024-11-20 11:24:30.000400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.255 [2024-11-20 11:24:30.000648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.255 [2024-11-20 11:24:30.000821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.255 [2024-11-20 11:24:30.000978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:22.255 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64377 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64377 ']' 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64377 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64377 00:12:22.256 killing process with pid 64377 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64377' 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64377 00:12:22.256 [2024-11-20 11:24:30.040681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.256 11:24:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64377 00:12:22.514 [2024-11-20 11:24:30.307941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.891 ************************************ 00:12:23.891 END TEST raid_state_function_test_sb 00:12:23.891 ************************************ 00:12:23.891 11:24:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:23.891 00:12:23.891 real 0m11.907s 00:12:23.891 user 0m19.856s 00:12:23.891 sys 0m1.571s 00:12:23.891 11:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.891 11:24:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.891 11:24:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:23.891 11:24:31 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.891 11:24:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.891 11:24:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.891 ************************************ 00:12:23.891 START TEST raid_superblock_test 00:12:23.891 ************************************ 00:12:23.891 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:23.892 11:24:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:23.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65014 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65014 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65014 ']' 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.892 11:24:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.892 [2024-11-20 11:24:31.471487] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:12:23.892 [2024-11-20 11:24:31.471993] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65014 ] 00:12:23.892 [2024-11-20 11:24:31.683749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.150 [2024-11-20 11:24:31.815846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.409 [2024-11-20 11:24:32.019792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.409 [2024-11-20 11:24:32.019838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:24.668 
11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.668 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 malloc1 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 [2024-11-20 11:24:32.518064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:24.928 [2024-11-20 11:24:32.518160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.928 [2024-11-20 11:24:32.518201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.928 [2024-11-20 11:24:32.518219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.928 [2024-11-20 11:24:32.521160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.928 [2024-11-20 11:24:32.521206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:24.928 pt1 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 malloc2 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.928 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 [2024-11-20 11:24:32.573773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.928 [2024-11-20 11:24:32.573983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.928 [2024-11-20 11:24:32.574062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:24.928 [2024-11-20 11:24:32.574288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.928 [2024-11-20 11:24:32.577077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.929 [2024-11-20 11:24:32.577245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.929 
pt2 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 malloc3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-11-20 11:24:32.637243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.929 [2024-11-20 11:24:32.637312] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.929 [2024-11-20 11:24:32.637346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:24.929 [2024-11-20 11:24:32.637362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.929 [2024-11-20 11:24:32.640227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.929 [2024-11-20 11:24:32.640396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.929 pt3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-11-20 11:24:32.649390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:24.929 [2024-11-20 11:24:32.651858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.929 [2024-11-20 11:24:32.651952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.929 [2024-11-20 11:24:32.652172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:24.929 [2024-11-20 11:24:32.652196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:24.929 [2024-11-20 11:24:32.652541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:24.929 [2024-11-20 11:24:32.652786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:24.929 [2024-11-20 11:24:32.652805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:24.929 [2024-11-20 11:24:32.653015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.929 11:24:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.929 "name": "raid_bdev1", 00:12:24.929 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:24.929 "strip_size_kb": 64, 00:12:24.929 "state": "online", 00:12:24.929 "raid_level": "raid0", 00:12:24.929 "superblock": true, 00:12:24.929 "num_base_bdevs": 3, 00:12:24.929 "num_base_bdevs_discovered": 3, 00:12:24.929 "num_base_bdevs_operational": 3, 00:12:24.929 "base_bdevs_list": [ 00:12:24.929 { 00:12:24.929 "name": "pt1", 00:12:24.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.929 "is_configured": true, 00:12:24.929 "data_offset": 2048, 00:12:24.929 "data_size": 63488 00:12:24.929 }, 00:12:24.929 { 00:12:24.929 "name": "pt2", 00:12:24.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.929 "is_configured": true, 00:12:24.929 "data_offset": 2048, 00:12:24.929 "data_size": 63488 00:12:24.929 }, 00:12:24.929 { 00:12:24.929 "name": "pt3", 00:12:24.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.929 "is_configured": true, 00:12:24.929 "data_offset": 2048, 00:12:24.929 "data_size": 63488 00:12:24.929 } 00:12:24.929 ] 00:12:24.929 }' 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.929 11:24:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.498 [2024-11-20 11:24:33.133880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.498 "name": "raid_bdev1", 00:12:25.498 "aliases": [ 00:12:25.498 "251b6265-d83f-44c2-b828-7b59f81cedb3" 00:12:25.498 ], 00:12:25.498 "product_name": "Raid Volume", 00:12:25.498 "block_size": 512, 00:12:25.498 "num_blocks": 190464, 00:12:25.498 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:25.498 "assigned_rate_limits": { 00:12:25.498 "rw_ios_per_sec": 0, 00:12:25.498 "rw_mbytes_per_sec": 0, 00:12:25.498 "r_mbytes_per_sec": 0, 00:12:25.498 "w_mbytes_per_sec": 0 00:12:25.498 }, 00:12:25.498 "claimed": false, 00:12:25.498 "zoned": false, 00:12:25.498 "supported_io_types": { 00:12:25.498 "read": true, 00:12:25.498 "write": true, 00:12:25.498 "unmap": true, 00:12:25.498 "flush": true, 00:12:25.498 "reset": true, 00:12:25.498 "nvme_admin": false, 00:12:25.498 "nvme_io": false, 00:12:25.498 "nvme_io_md": false, 00:12:25.498 "write_zeroes": true, 00:12:25.498 "zcopy": false, 00:12:25.498 "get_zone_info": false, 00:12:25.498 "zone_management": false, 00:12:25.498 "zone_append": false, 00:12:25.498 "compare": 
false, 00:12:25.498 "compare_and_write": false, 00:12:25.498 "abort": false, 00:12:25.498 "seek_hole": false, 00:12:25.498 "seek_data": false, 00:12:25.498 "copy": false, 00:12:25.498 "nvme_iov_md": false 00:12:25.498 }, 00:12:25.498 "memory_domains": [ 00:12:25.498 { 00:12:25.498 "dma_device_id": "system", 00:12:25.498 "dma_device_type": 1 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.498 "dma_device_type": 2 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "dma_device_id": "system", 00:12:25.498 "dma_device_type": 1 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.498 "dma_device_type": 2 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "dma_device_id": "system", 00:12:25.498 "dma_device_type": 1 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.498 "dma_device_type": 2 00:12:25.498 } 00:12:25.498 ], 00:12:25.498 "driver_specific": { 00:12:25.498 "raid": { 00:12:25.498 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:25.498 "strip_size_kb": 64, 00:12:25.498 "state": "online", 00:12:25.498 "raid_level": "raid0", 00:12:25.498 "superblock": true, 00:12:25.498 "num_base_bdevs": 3, 00:12:25.498 "num_base_bdevs_discovered": 3, 00:12:25.498 "num_base_bdevs_operational": 3, 00:12:25.498 "base_bdevs_list": [ 00:12:25.498 { 00:12:25.498 "name": "pt1", 00:12:25.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 2048, 00:12:25.498 "data_size": 63488 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "name": "pt2", 00:12:25.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 2048, 00:12:25.498 "data_size": 63488 00:12:25.498 }, 00:12:25.498 { 00:12:25.498 "name": "pt3", 00:12:25.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.498 "is_configured": true, 00:12:25.498 "data_offset": 2048, 00:12:25.498 "data_size": 
63488 00:12:25.498 } 00:12:25.498 ] 00:12:25.498 } 00:12:25.498 } 00:12:25.498 }' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.498 pt2 00:12:25.498 pt3' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.498 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 [2024-11-20 11:24:33.429868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=251b6265-d83f-44c2-b828-7b59f81cedb3 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 251b6265-d83f-44c2-b828-7b59f81cedb3 ']' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 [2024-11-20 11:24:33.473509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.757 [2024-11-20 11:24:33.473544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.757 [2024-11-20 11:24:33.473659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.757 [2024-11-20 11:24:33.473768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.757 [2024-11-20 11:24:33.473785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:25.757 11:24:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.757 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.017 [2024-11-20 11:24:33.625662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.017 [2024-11-20 11:24:33.628162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.017 [2024-11-20 11:24:33.628227] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:26.017 [2024-11-20 11:24:33.628298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.017 [2024-11-20 11:24:33.628372] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.017 [2024-11-20 11:24:33.628407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.017 [2024-11-20 11:24:33.628436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.017 [2024-11-20 11:24:33.628452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:26.017 request: 00:12:26.017 { 00:12:26.017 "name": "raid_bdev1", 00:12:26.017 "raid_level": "raid0", 00:12:26.017 "base_bdevs": [ 00:12:26.017 "malloc1", 00:12:26.017 "malloc2", 00:12:26.017 "malloc3" 00:12:26.017 ], 00:12:26.017 "strip_size_kb": 64, 00:12:26.017 "superblock": false, 00:12:26.017 "method": "bdev_raid_create", 00:12:26.017 "req_id": 1 00:12:26.017 } 00:12:26.017 Got JSON-RPC error response 00:12:26.017 response: 00:12:26.017 { 00:12:26.017 "code": -17, 00:12:26.017 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.017 } 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.017 [2024-11-20 11:24:33.697605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.017 [2024-11-20 11:24:33.697841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.017 [2024-11-20 11:24:33.697886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:26.017 [2024-11-20 11:24:33.697903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.017 [2024-11-20 11:24:33.700919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.017 [2024-11-20 11:24:33.700964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.017 [2024-11-20 11:24:33.701076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.017 [2024-11-20 11:24:33.701145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:12:26.017 pt1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.017 "name": "raid_bdev1", 00:12:26.017 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:26.017 
"strip_size_kb": 64, 00:12:26.017 "state": "configuring", 00:12:26.017 "raid_level": "raid0", 00:12:26.017 "superblock": true, 00:12:26.017 "num_base_bdevs": 3, 00:12:26.017 "num_base_bdevs_discovered": 1, 00:12:26.017 "num_base_bdevs_operational": 3, 00:12:26.017 "base_bdevs_list": [ 00:12:26.017 { 00:12:26.017 "name": "pt1", 00:12:26.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.017 "is_configured": true, 00:12:26.017 "data_offset": 2048, 00:12:26.017 "data_size": 63488 00:12:26.017 }, 00:12:26.017 { 00:12:26.017 "name": null, 00:12:26.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.017 "is_configured": false, 00:12:26.017 "data_offset": 2048, 00:12:26.017 "data_size": 63488 00:12:26.017 }, 00:12:26.017 { 00:12:26.017 "name": null, 00:12:26.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.017 "is_configured": false, 00:12:26.017 "data_offset": 2048, 00:12:26.017 "data_size": 63488 00:12:26.017 } 00:12:26.017 ] 00:12:26.017 }' 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.017 11:24:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 [2024-11-20 11:24:34.213821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.585 [2024-11-20 11:24:34.213898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.585 [2024-11-20 11:24:34.213933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:12:26.585 [2024-11-20 11:24:34.213948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.585 [2024-11-20 11:24:34.214528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.585 [2024-11-20 11:24:34.214591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.585 [2024-11-20 11:24:34.214712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.585 [2024-11-20 11:24:34.214751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.585 pt2 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 [2024-11-20 11:24:34.221794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.585 11:24:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.585 "name": "raid_bdev1", 00:12:26.585 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:26.585 "strip_size_kb": 64, 00:12:26.585 "state": "configuring", 00:12:26.585 "raid_level": "raid0", 00:12:26.585 "superblock": true, 00:12:26.585 "num_base_bdevs": 3, 00:12:26.585 "num_base_bdevs_discovered": 1, 00:12:26.585 "num_base_bdevs_operational": 3, 00:12:26.585 "base_bdevs_list": [ 00:12:26.585 { 00:12:26.585 "name": "pt1", 00:12:26.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.585 "is_configured": true, 00:12:26.585 "data_offset": 2048, 00:12:26.585 "data_size": 63488 00:12:26.585 }, 00:12:26.585 { 00:12:26.585 "name": null, 00:12:26.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.585 "is_configured": false, 00:12:26.585 "data_offset": 0, 00:12:26.585 "data_size": 63488 00:12:26.585 }, 00:12:26.585 { 00:12:26.585 "name": null, 00:12:26.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.585 
"is_configured": false, 00:12:26.585 "data_offset": 2048, 00:12:26.585 "data_size": 63488 00:12:26.585 } 00:12:26.585 ] 00:12:26.585 }' 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.585 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.153 [2024-11-20 11:24:34.725928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.153 [2024-11-20 11:24:34.726010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.153 [2024-11-20 11:24:34.726037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:27.153 [2024-11-20 11:24:34.726066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.153 [2024-11-20 11:24:34.726675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.153 [2024-11-20 11:24:34.726706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.153 [2024-11-20 11:24:34.726817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.153 [2024-11-20 11:24:34.726854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.153 pt2 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.153 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.154 [2024-11-20 11:24:34.733911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.154 [2024-11-20 11:24:34.733969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.154 [2024-11-20 11:24:34.733991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.154 [2024-11-20 11:24:34.734013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.154 [2024-11-20 11:24:34.734480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.154 [2024-11-20 11:24:34.734520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.154 [2024-11-20 11:24:34.734601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.154 [2024-11-20 11:24:34.734657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.154 [2024-11-20 11:24:34.734811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.154 [2024-11-20 11:24:34.734839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:27.154 [2024-11-20 11:24:34.735151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:27.154 [2024-11-20 11:24:34.735330] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.154 [2024-11-20 11:24:34.735351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.154 [2024-11-20 11:24:34.735516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.154 pt3 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.154 "name": "raid_bdev1", 00:12:27.154 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:27.154 "strip_size_kb": 64, 00:12:27.154 "state": "online", 00:12:27.154 "raid_level": "raid0", 00:12:27.154 "superblock": true, 00:12:27.154 "num_base_bdevs": 3, 00:12:27.154 "num_base_bdevs_discovered": 3, 00:12:27.154 "num_base_bdevs_operational": 3, 00:12:27.154 "base_bdevs_list": [ 00:12:27.154 { 00:12:27.154 "name": "pt1", 00:12:27.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.154 "is_configured": true, 00:12:27.154 "data_offset": 2048, 00:12:27.154 "data_size": 63488 00:12:27.154 }, 00:12:27.154 { 00:12:27.154 "name": "pt2", 00:12:27.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.154 "is_configured": true, 00:12:27.154 "data_offset": 2048, 00:12:27.154 "data_size": 63488 00:12:27.154 }, 00:12:27.154 { 00:12:27.154 "name": "pt3", 00:12:27.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.154 "is_configured": true, 00:12:27.154 "data_offset": 2048, 00:12:27.154 "data_size": 63488 00:12:27.154 } 00:12:27.154 ] 00:12:27.154 }' 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.154 11:24:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.413 11:24:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:27.413 [2024-11-20 11:24:35.218516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.413 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.672 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.672 "name": "raid_bdev1", 00:12:27.672 "aliases": [ 00:12:27.672 "251b6265-d83f-44c2-b828-7b59f81cedb3" 00:12:27.672 ], 00:12:27.672 "product_name": "Raid Volume", 00:12:27.672 "block_size": 512, 00:12:27.672 "num_blocks": 190464, 00:12:27.672 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:27.672 "assigned_rate_limits": { 00:12:27.672 "rw_ios_per_sec": 0, 00:12:27.672 "rw_mbytes_per_sec": 0, 00:12:27.672 "r_mbytes_per_sec": 0, 00:12:27.672 "w_mbytes_per_sec": 0 00:12:27.672 }, 00:12:27.672 "claimed": false, 00:12:27.672 "zoned": false, 00:12:27.672 "supported_io_types": { 00:12:27.672 "read": true, 00:12:27.672 "write": true, 00:12:27.672 "unmap": true, 00:12:27.672 "flush": true, 00:12:27.672 "reset": true, 00:12:27.672 "nvme_admin": false, 00:12:27.672 "nvme_io": false, 00:12:27.672 "nvme_io_md": false, 00:12:27.672 
"write_zeroes": true, 00:12:27.672 "zcopy": false, 00:12:27.672 "get_zone_info": false, 00:12:27.672 "zone_management": false, 00:12:27.672 "zone_append": false, 00:12:27.672 "compare": false, 00:12:27.672 "compare_and_write": false, 00:12:27.672 "abort": false, 00:12:27.672 "seek_hole": false, 00:12:27.672 "seek_data": false, 00:12:27.672 "copy": false, 00:12:27.672 "nvme_iov_md": false 00:12:27.672 }, 00:12:27.672 "memory_domains": [ 00:12:27.672 { 00:12:27.672 "dma_device_id": "system", 00:12:27.672 "dma_device_type": 1 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.672 "dma_device_type": 2 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "dma_device_id": "system", 00:12:27.672 "dma_device_type": 1 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.672 "dma_device_type": 2 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "dma_device_id": "system", 00:12:27.672 "dma_device_type": 1 00:12:27.672 }, 00:12:27.672 { 00:12:27.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.672 "dma_device_type": 2 00:12:27.672 } 00:12:27.672 ], 00:12:27.672 "driver_specific": { 00:12:27.672 "raid": { 00:12:27.672 "uuid": "251b6265-d83f-44c2-b828-7b59f81cedb3", 00:12:27.672 "strip_size_kb": 64, 00:12:27.672 "state": "online", 00:12:27.672 "raid_level": "raid0", 00:12:27.672 "superblock": true, 00:12:27.672 "num_base_bdevs": 3, 00:12:27.672 "num_base_bdevs_discovered": 3, 00:12:27.672 "num_base_bdevs_operational": 3, 00:12:27.673 "base_bdevs_list": [ 00:12:27.673 { 00:12:27.673 "name": "pt1", 00:12:27.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.673 "is_configured": true, 00:12:27.673 "data_offset": 2048, 00:12:27.673 "data_size": 63488 00:12:27.673 }, 00:12:27.673 { 00:12:27.673 "name": "pt2", 00:12:27.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.673 "is_configured": true, 00:12:27.673 "data_offset": 2048, 00:12:27.673 "data_size": 63488 00:12:27.673 }, 00:12:27.673 
{ 00:12:27.673 "name": "pt3", 00:12:27.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.673 "is_configured": true, 00:12:27.673 "data_offset": 2048, 00:12:27.673 "data_size": 63488 00:12:27.673 } 00:12:27.673 ] 00:12:27.673 } 00:12:27.673 } 00:12:27.673 }' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.673 pt2 00:12:27.673 pt3' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.673 11:24:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.673 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.993 
[2024-11-20 11:24:35.534535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 251b6265-d83f-44c2-b828-7b59f81cedb3 '!=' 251b6265-d83f-44c2-b828-7b59f81cedb3 ']' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65014 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65014 ']' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65014 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65014 00:12:27.993 killing process with pid 65014 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65014' 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65014 00:12:27.993 [2024-11-20 11:24:35.612468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.993 11:24:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65014 00:12:27.993 [2024-11-20 11:24:35.612584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.993 [2024-11-20 11:24:35.612681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.993 [2024-11-20 11:24:35.612704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:28.252 [2024-11-20 11:24:35.873799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.190 ************************************ 00:12:29.190 END TEST raid_superblock_test 00:12:29.190 ************************************ 00:12:29.190 11:24:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:29.190 00:12:29.190 real 0m5.499s 00:12:29.190 user 0m8.262s 00:12:29.190 sys 0m0.797s 00:12:29.190 11:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.190 11:24:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 11:24:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:29.190 11:24:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:29.190 11:24:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.190 11:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 ************************************ 00:12:29.190 START TEST raid_read_error_test 00:12:29.190 ************************************ 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:29.190 11:24:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hqxoVtikS6 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65267 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:29.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65267 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65267 ']' 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.190 11:24:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.449 [2024-11-20 11:24:37.052856] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:12:29.449 [2024-11-20 11:24:37.053308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65267 ] 00:12:29.449 [2024-11-20 11:24:37.248873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.708 [2024-11-20 11:24:37.411695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.967 [2024-11-20 11:24:37.627943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.967 [2024-11-20 11:24:37.627999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.227 BaseBdev1_malloc 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.227 true 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.227 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.228 [2024-11-20 11:24:38.071353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:30.487 [2024-11-20 11:24:38.071596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.487 [2024-11-20 11:24:38.071663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:30.487 [2024-11-20 11:24:38.071689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.487 [2024-11-20 11:24:38.074590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.487 [2024-11-20 11:24:38.074800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.487 BaseBdev1 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 BaseBdev2_malloc 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 true 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 [2024-11-20 11:24:38.132334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:30.487 [2024-11-20 11:24:38.132424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.487 [2024-11-20 11:24:38.132451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:30.487 [2024-11-20 11:24:38.132468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.487 [2024-11-20 11:24:38.135546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.487 [2024-11-20 11:24:38.135776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.487 BaseBdev2 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 BaseBdev3_malloc 00:12:30.487 11:24:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 true 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 [2024-11-20 11:24:38.207039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:30.487 [2024-11-20 11:24:38.207110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.487 [2024-11-20 11:24:38.207150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:30.487 [2024-11-20 11:24:38.207168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.487 [2024-11-20 11:24:38.210219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.487 [2024-11-20 11:24:38.210388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:30.487 BaseBdev3 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 [2024-11-20 11:24:38.219167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.487 [2024-11-20 11:24:38.221580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.487 [2024-11-20 11:24:38.221749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.487 [2024-11-20 11:24:38.222018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.487 [2024-11-20 11:24:38.222052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:30.487 [2024-11-20 11:24:38.222367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:30.487 [2024-11-20 11:24:38.222581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.487 [2024-11-20 11:24:38.222604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:30.487 [2024-11-20 11:24:38.222810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.487 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.488 11:24:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.488 "name": "raid_bdev1", 00:12:30.488 "uuid": "72d10874-d40f-4f86-b164-20faf5ad9f8b", 00:12:30.488 "strip_size_kb": 64, 00:12:30.488 "state": "online", 00:12:30.488 "raid_level": "raid0", 00:12:30.488 "superblock": true, 00:12:30.488 "num_base_bdevs": 3, 00:12:30.488 "num_base_bdevs_discovered": 3, 00:12:30.488 "num_base_bdevs_operational": 3, 00:12:30.488 "base_bdevs_list": [ 00:12:30.488 { 00:12:30.488 "name": "BaseBdev1", 00:12:30.488 "uuid": "43d21607-04cb-5ded-9a14-ce7b356c2151", 00:12:30.488 "is_configured": true, 00:12:30.488 "data_offset": 2048, 00:12:30.488 "data_size": 63488 00:12:30.488 }, 00:12:30.488 { 00:12:30.488 "name": "BaseBdev2", 00:12:30.488 "uuid": "16054b52-69cb-5486-a0c5-a2559a95c142", 00:12:30.488 "is_configured": true, 00:12:30.488 "data_offset": 2048, 00:12:30.488 "data_size": 63488 
00:12:30.488 }, 00:12:30.488 { 00:12:30.488 "name": "BaseBdev3", 00:12:30.488 "uuid": "84046a21-ec66-5838-baee-93d960743e84", 00:12:30.488 "is_configured": true, 00:12:30.488 "data_offset": 2048, 00:12:30.488 "data_size": 63488 00:12:30.488 } 00:12:30.488 ] 00:12:30.488 }' 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.488 11:24:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.054 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:31.054 11:24:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:31.054 [2024-11-20 11:24:38.868755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.992 "name": "raid_bdev1", 00:12:31.992 "uuid": "72d10874-d40f-4f86-b164-20faf5ad9f8b", 00:12:31.992 "strip_size_kb": 64, 00:12:31.992 "state": "online", 00:12:31.992 "raid_level": "raid0", 00:12:31.992 "superblock": true, 00:12:31.992 "num_base_bdevs": 3, 00:12:31.992 "num_base_bdevs_discovered": 3, 00:12:31.992 "num_base_bdevs_operational": 3, 00:12:31.992 "base_bdevs_list": [ 00:12:31.992 { 00:12:31.992 "name": "BaseBdev1", 00:12:31.992 "uuid": "43d21607-04cb-5ded-9a14-ce7b356c2151", 00:12:31.992 "is_configured": true, 00:12:31.992 "data_offset": 2048, 00:12:31.992 "data_size": 63488 
00:12:31.992 }, 00:12:31.992 { 00:12:31.992 "name": "BaseBdev2", 00:12:31.992 "uuid": "16054b52-69cb-5486-a0c5-a2559a95c142", 00:12:31.992 "is_configured": true, 00:12:31.992 "data_offset": 2048, 00:12:31.992 "data_size": 63488 00:12:31.992 }, 00:12:31.992 { 00:12:31.992 "name": "BaseBdev3", 00:12:31.992 "uuid": "84046a21-ec66-5838-baee-93d960743e84", 00:12:31.992 "is_configured": true, 00:12:31.992 "data_offset": 2048, 00:12:31.992 "data_size": 63488 00:12:31.992 } 00:12:31.992 ] 00:12:31.992 }' 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.992 11:24:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.562 [2024-11-20 11:24:40.299735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.562 [2024-11-20 11:24:40.299773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.562 [2024-11-20 11:24:40.303133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.562 [2024-11-20 11:24:40.303194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.562 [2024-11-20 11:24:40.303256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.562 [2024-11-20 11:24:40.303273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:32.562 { 00:12:32.562 "results": [ 00:12:32.562 { 00:12:32.562 "job": "raid_bdev1", 00:12:32.562 "core_mask": "0x1", 00:12:32.562 "workload": "randrw", 00:12:32.562 "percentage": 50, 
00:12:32.562 "status": "finished", 00:12:32.562 "queue_depth": 1, 00:12:32.562 "io_size": 131072, 00:12:32.562 "runtime": 1.428575, 00:12:32.562 "iops": 10652.573368566578, 00:12:32.562 "mibps": 1331.5716710708223, 00:12:32.562 "io_failed": 1, 00:12:32.562 "io_timeout": 0, 00:12:32.562 "avg_latency_us": 131.15547383951878, 00:12:32.562 "min_latency_us": 39.79636363636364, 00:12:32.562 "max_latency_us": 1846.9236363636364 00:12:32.562 } 00:12:32.562 ], 00:12:32.562 "core_count": 1 00:12:32.562 } 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65267 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65267 ']' 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65267 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65267 00:12:32.562 killing process with pid 65267 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65267' 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65267 00:12:32.562 [2024-11-20 11:24:40.347137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.562 11:24:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65267 00:12:32.821 [2024-11-20 
11:24:40.552037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hqxoVtikS6 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:34.200 ************************************ 00:12:34.200 END TEST raid_read_error_test 00:12:34.200 ************************************ 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:34.200 00:12:34.200 real 0m4.721s 00:12:34.200 user 0m5.844s 00:12:34.200 sys 0m0.587s 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.200 11:24:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.200 11:24:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:34.200 11:24:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:34.200 11:24:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.200 11:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.201 ************************************ 00:12:34.201 START TEST raid_write_error_test 00:12:34.201 ************************************ 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:34.201 11:24:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:34.201 11:24:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iG2cNzJxKk 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65417 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65417 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65417 ']' 00:12:34.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.201 11:24:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.201 [2024-11-20 11:24:41.822784] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:12:34.201 [2024-11-20 11:24:41.823176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65417 ] 00:12:34.201 [2024-11-20 11:24:42.008752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.459 [2024-11-20 11:24:42.139614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.718 [2024-11-20 11:24:42.366190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.718 [2024-11-20 11:24:42.366516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.977 BaseBdev1_malloc 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.977 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.236 true 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.236 [2024-11-20 11:24:42.825913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.236 [2024-11-20 11:24:42.825983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.236 [2024-11-20 11:24:42.826014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.236 [2024-11-20 11:24:42.826033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.236 [2024-11-20 11:24:42.829201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.236 [2024-11-20 11:24:42.829389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.236 BaseBdev1 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.236 BaseBdev2_malloc 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.236 true 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.236 [2024-11-20 11:24:42.887168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.236 [2024-11-20 11:24:42.887240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.236 [2024-11-20 11:24:42.887268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:35.236 [2024-11-20 11:24:42.887286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.236 [2024-11-20 11:24:42.890104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.236 [2024-11-20 11:24:42.890161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.236 BaseBdev2 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.236 11:24:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.236 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.236 BaseBdev3_malloc 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.237 true 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.237 [2024-11-20 11:24:42.973779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.237 [2024-11-20 11:24:42.973870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.237 [2024-11-20 11:24:42.973909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:35.237 [2024-11-20 11:24:42.973942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.237 [2024-11-20 11:24:42.977265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.237 [2024-11-20 11:24:42.977454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:35.237 BaseBdev3 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.237 [2024-11-20 11:24:42.981862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.237 [2024-11-20 11:24:42.984365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.237 [2024-11-20 11:24:42.984481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.237 [2024-11-20 11:24:42.984775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:35.237 [2024-11-20 11:24:42.984798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:35.237 [2024-11-20 11:24:42.985121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:35.237 [2024-11-20 11:24:42.985354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:35.237 [2024-11-20 11:24:42.985379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:35.237 [2024-11-20 11:24:42.985565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.237 11:24:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.237 11:24:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.237 11:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.237 "name": "raid_bdev1", 00:12:35.237 "uuid": "a555ce35-5ca9-47bb-a299-6b08ddd26c25", 00:12:35.237 "strip_size_kb": 64, 00:12:35.237 "state": "online", 00:12:35.237 "raid_level": "raid0", 00:12:35.237 "superblock": true, 00:12:35.237 "num_base_bdevs": 3, 00:12:35.237 "num_base_bdevs_discovered": 3, 00:12:35.237 "num_base_bdevs_operational": 3, 00:12:35.237 "base_bdevs_list": [ 00:12:35.237 { 00:12:35.237 "name": "BaseBdev1", 
00:12:35.237 "uuid": "c20cacef-bcb7-53b9-a889-ade6acd8b445", 00:12:35.237 "is_configured": true, 00:12:35.237 "data_offset": 2048, 00:12:35.237 "data_size": 63488 00:12:35.237 }, 00:12:35.237 { 00:12:35.237 "name": "BaseBdev2", 00:12:35.237 "uuid": "fdad0455-fce6-5b52-853e-34390ce6438e", 00:12:35.237 "is_configured": true, 00:12:35.237 "data_offset": 2048, 00:12:35.237 "data_size": 63488 00:12:35.237 }, 00:12:35.237 { 00:12:35.237 "name": "BaseBdev3", 00:12:35.237 "uuid": "0a3b7c77-9028-5e12-bee7-30d9137d5060", 00:12:35.237 "is_configured": true, 00:12:35.237 "data_offset": 2048, 00:12:35.237 "data_size": 63488 00:12:35.237 } 00:12:35.237 ] 00:12:35.237 }' 00:12:35.237 11:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.237 11:24:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.869 11:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:35.869 11:24:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:35.869 [2024-11-20 11:24:43.563457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.803 "name": "raid_bdev1", 00:12:36.803 "uuid": "a555ce35-5ca9-47bb-a299-6b08ddd26c25", 00:12:36.803 "strip_size_kb": 64, 00:12:36.803 "state": "online", 00:12:36.803 
"raid_level": "raid0", 00:12:36.803 "superblock": true, 00:12:36.803 "num_base_bdevs": 3, 00:12:36.803 "num_base_bdevs_discovered": 3, 00:12:36.803 "num_base_bdevs_operational": 3, 00:12:36.803 "base_bdevs_list": [ 00:12:36.803 { 00:12:36.803 "name": "BaseBdev1", 00:12:36.803 "uuid": "c20cacef-bcb7-53b9-a889-ade6acd8b445", 00:12:36.803 "is_configured": true, 00:12:36.803 "data_offset": 2048, 00:12:36.803 "data_size": 63488 00:12:36.803 }, 00:12:36.803 { 00:12:36.803 "name": "BaseBdev2", 00:12:36.803 "uuid": "fdad0455-fce6-5b52-853e-34390ce6438e", 00:12:36.803 "is_configured": true, 00:12:36.803 "data_offset": 2048, 00:12:36.803 "data_size": 63488 00:12:36.803 }, 00:12:36.803 { 00:12:36.803 "name": "BaseBdev3", 00:12:36.803 "uuid": "0a3b7c77-9028-5e12-bee7-30d9137d5060", 00:12:36.803 "is_configured": true, 00:12:36.803 "data_offset": 2048, 00:12:36.803 "data_size": 63488 00:12:36.803 } 00:12:36.803 ] 00:12:36.803 }' 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.803 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.369 [2024-11-20 11:24:44.966000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.369 [2024-11-20 11:24:44.966036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.369 [2024-11-20 11:24:44.969326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.369 [2024-11-20 11:24:44.969386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.369 [2024-11-20 11:24:44.969440] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.369 [2024-11-20 11:24:44.969456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:37.369 { 00:12:37.369 "results": [ 00:12:37.369 { 00:12:37.369 "job": "raid_bdev1", 00:12:37.369 "core_mask": "0x1", 00:12:37.369 "workload": "randrw", 00:12:37.369 "percentage": 50, 00:12:37.369 "status": "finished", 00:12:37.369 "queue_depth": 1, 00:12:37.369 "io_size": 131072, 00:12:37.369 "runtime": 1.399966, 00:12:37.369 "iops": 10641.687012398872, 00:12:37.369 "mibps": 1330.210876549859, 00:12:37.369 "io_failed": 1, 00:12:37.369 "io_timeout": 0, 00:12:37.369 "avg_latency_us": 131.31061926059712, 00:12:37.369 "min_latency_us": 41.89090909090909, 00:12:37.369 "max_latency_us": 1824.581818181818 00:12:37.369 } 00:12:37.369 ], 00:12:37.369 "core_count": 1 00:12:37.369 } 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65417 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65417 ']' 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65417 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.369 11:24:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65417 00:12:37.369 killing process with pid 65417 00:12:37.369 11:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.369 11:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.369 11:24:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65417' 00:12:37.369 11:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65417 00:12:37.369 [2024-11-20 11:24:45.009322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.369 11:24:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65417 00:12:37.627 [2024-11-20 11:24:45.215203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iG2cNzJxKk 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:38.562 00:12:38.562 real 0m4.603s 00:12:38.562 user 0m5.616s 00:12:38.562 sys 0m0.565s 00:12:38.562 ************************************ 00:12:38.562 END TEST raid_write_error_test 00:12:38.562 ************************************ 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.562 11:24:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.562 11:24:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:38.562 11:24:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:38.562 11:24:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:38.562 11:24:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.562 11:24:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.562 ************************************ 00:12:38.562 START TEST raid_state_function_test 00:12:38.562 ************************************ 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:38.562 11:24:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65556 00:12:38.562 Process raid pid: 65556 00:12:38.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65556' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65556 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65556 ']' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.562 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.821 [2024-11-20 11:24:46.487356] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:12:38.821 [2024-11-20 11:24:46.487533] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.085 [2024-11-20 11:24:46.670419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.085 [2024-11-20 11:24:46.824500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.350 [2024-11-20 11:24:47.034359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.350 [2024-11-20 11:24:47.034626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.917 [2024-11-20 11:24:47.465025] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.917 [2024-11-20 11:24:47.465091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.917 [2024-11-20 11:24:47.465109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.917 [2024-11-20 11:24:47.465127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.917 [2024-11-20 11:24:47.465137] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:39.917 [2024-11-20 11:24:47.465152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.917 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.918 11:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.918 "name": "Existed_Raid", 00:12:39.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.918 "strip_size_kb": 64, 00:12:39.918 "state": "configuring", 00:12:39.918 "raid_level": "concat", 00:12:39.918 "superblock": false, 00:12:39.918 "num_base_bdevs": 3, 00:12:39.918 "num_base_bdevs_discovered": 0, 00:12:39.918 "num_base_bdevs_operational": 3, 00:12:39.918 "base_bdevs_list": [ 00:12:39.918 { 00:12:39.918 "name": "BaseBdev1", 00:12:39.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.918 "is_configured": false, 00:12:39.918 "data_offset": 0, 00:12:39.918 "data_size": 0 00:12:39.918 }, 00:12:39.918 { 00:12:39.918 "name": "BaseBdev2", 00:12:39.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.918 "is_configured": false, 00:12:39.918 "data_offset": 0, 00:12:39.918 "data_size": 0 00:12:39.918 }, 00:12:39.918 { 00:12:39.918 "name": "BaseBdev3", 00:12:39.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.918 "is_configured": false, 00:12:39.918 "data_offset": 0, 00:12:39.918 "data_size": 0 00:12:39.918 } 00:12:39.918 ] 00:12:39.918 }' 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.918 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 [2024-11-20 11:24:47.933171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.177 [2024-11-20 11:24:47.933225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 [2024-11-20 11:24:47.941115] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.177 [2024-11-20 11:24:47.941186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.177 [2024-11-20 11:24:47.941202] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.177 [2024-11-20 11:24:47.941219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.177 [2024-11-20 11:24:47.941229] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.177 [2024-11-20 11:24:47.941243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 [2024-11-20 11:24:47.987531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.177 BaseBdev1 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.177 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.177 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.177 [ 00:12:40.177 { 00:12:40.177 "name": "BaseBdev1", 00:12:40.177 "aliases": [ 00:12:40.177 "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da" 00:12:40.177 ], 00:12:40.177 "product_name": "Malloc disk", 00:12:40.177 "block_size": 512, 00:12:40.177 "num_blocks": 65536, 00:12:40.177 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:40.177 "assigned_rate_limits": { 00:12:40.177 "rw_ios_per_sec": 0, 00:12:40.177 "rw_mbytes_per_sec": 0, 00:12:40.177 "r_mbytes_per_sec": 0, 00:12:40.177 "w_mbytes_per_sec": 0 00:12:40.177 }, 
00:12:40.177 "claimed": true, 00:12:40.177 "claim_type": "exclusive_write", 00:12:40.177 "zoned": false, 00:12:40.177 "supported_io_types": { 00:12:40.177 "read": true, 00:12:40.177 "write": true, 00:12:40.177 "unmap": true, 00:12:40.177 "flush": true, 00:12:40.177 "reset": true, 00:12:40.177 "nvme_admin": false, 00:12:40.177 "nvme_io": false, 00:12:40.177 "nvme_io_md": false, 00:12:40.177 "write_zeroes": true, 00:12:40.177 "zcopy": true, 00:12:40.177 "get_zone_info": false, 00:12:40.177 "zone_management": false, 00:12:40.177 "zone_append": false, 00:12:40.177 "compare": false, 00:12:40.177 "compare_and_write": false, 00:12:40.177 "abort": true, 00:12:40.177 "seek_hole": false, 00:12:40.177 "seek_data": false, 00:12:40.177 "copy": true, 00:12:40.177 "nvme_iov_md": false 00:12:40.177 }, 00:12:40.177 "memory_domains": [ 00:12:40.177 { 00:12:40.177 "dma_device_id": "system", 00:12:40.177 "dma_device_type": 1 00:12:40.177 }, 00:12:40.177 { 00:12:40.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.177 "dma_device_type": 2 00:12:40.177 } 00:12:40.436 ], 00:12:40.436 "driver_specific": {} 00:12:40.436 } 00:12:40.436 ] 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.436 11:24:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.436 "name": "Existed_Raid", 00:12:40.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.436 "strip_size_kb": 64, 00:12:40.436 "state": "configuring", 00:12:40.436 "raid_level": "concat", 00:12:40.436 "superblock": false, 00:12:40.436 "num_base_bdevs": 3, 00:12:40.436 "num_base_bdevs_discovered": 1, 00:12:40.436 "num_base_bdevs_operational": 3, 00:12:40.436 "base_bdevs_list": [ 00:12:40.436 { 00:12:40.436 "name": "BaseBdev1", 00:12:40.436 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:40.436 "is_configured": true, 00:12:40.436 "data_offset": 0, 00:12:40.436 "data_size": 65536 00:12:40.436 }, 00:12:40.436 { 00:12:40.436 "name": "BaseBdev2", 00:12:40.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.436 "is_configured": false, 00:12:40.436 
"data_offset": 0, 00:12:40.436 "data_size": 0 00:12:40.436 }, 00:12:40.436 { 00:12:40.436 "name": "BaseBdev3", 00:12:40.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.436 "is_configured": false, 00:12:40.436 "data_offset": 0, 00:12:40.436 "data_size": 0 00:12:40.436 } 00:12:40.436 ] 00:12:40.436 }' 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.436 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 [2024-11-20 11:24:48.527853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.695 [2024-11-20 11:24:48.527919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.695 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.695 [2024-11-20 11:24:48.535896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.695 [2024-11-20 11:24:48.538484] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.695 [2024-11-20 11:24:48.538549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:40.695 [2024-11-20 11:24:48.538566] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.695 [2024-11-20 11:24:48.538583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.953 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.953 "name": "Existed_Raid", 00:12:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.953 "strip_size_kb": 64, 00:12:40.953 "state": "configuring", 00:12:40.953 "raid_level": "concat", 00:12:40.953 "superblock": false, 00:12:40.953 "num_base_bdevs": 3, 00:12:40.953 "num_base_bdevs_discovered": 1, 00:12:40.953 "num_base_bdevs_operational": 3, 00:12:40.953 "base_bdevs_list": [ 00:12:40.953 { 00:12:40.953 "name": "BaseBdev1", 00:12:40.953 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:40.953 "is_configured": true, 00:12:40.953 "data_offset": 0, 00:12:40.953 "data_size": 65536 00:12:40.953 }, 00:12:40.953 { 00:12:40.953 "name": "BaseBdev2", 00:12:40.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.953 "is_configured": false, 00:12:40.953 "data_offset": 0, 00:12:40.953 "data_size": 0 00:12:40.953 }, 00:12:40.953 { 00:12:40.953 "name": "BaseBdev3", 00:12:40.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.954 "is_configured": false, 00:12:40.954 "data_offset": 0, 00:12:40.954 "data_size": 0 00:12:40.954 } 00:12:40.954 ] 00:12:40.954 }' 00:12:40.954 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.954 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.212 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.212 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:41.212 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.471 [2024-11-20 11:24:49.093843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.471 BaseBdev2 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.471 [ 00:12:41.471 { 00:12:41.471 "name": "BaseBdev2", 00:12:41.471 "aliases": [ 00:12:41.471 "630ad9b8-4309-4e43-b008-d1ae79191043" 00:12:41.471 ], 00:12:41.471 
"product_name": "Malloc disk", 00:12:41.471 "block_size": 512, 00:12:41.471 "num_blocks": 65536, 00:12:41.471 "uuid": "630ad9b8-4309-4e43-b008-d1ae79191043", 00:12:41.471 "assigned_rate_limits": { 00:12:41.471 "rw_ios_per_sec": 0, 00:12:41.471 "rw_mbytes_per_sec": 0, 00:12:41.471 "r_mbytes_per_sec": 0, 00:12:41.471 "w_mbytes_per_sec": 0 00:12:41.471 }, 00:12:41.471 "claimed": true, 00:12:41.471 "claim_type": "exclusive_write", 00:12:41.471 "zoned": false, 00:12:41.471 "supported_io_types": { 00:12:41.471 "read": true, 00:12:41.471 "write": true, 00:12:41.471 "unmap": true, 00:12:41.471 "flush": true, 00:12:41.471 "reset": true, 00:12:41.471 "nvme_admin": false, 00:12:41.471 "nvme_io": false, 00:12:41.471 "nvme_io_md": false, 00:12:41.471 "write_zeroes": true, 00:12:41.471 "zcopy": true, 00:12:41.471 "get_zone_info": false, 00:12:41.471 "zone_management": false, 00:12:41.471 "zone_append": false, 00:12:41.471 "compare": false, 00:12:41.471 "compare_and_write": false, 00:12:41.471 "abort": true, 00:12:41.471 "seek_hole": false, 00:12:41.471 "seek_data": false, 00:12:41.471 "copy": true, 00:12:41.471 "nvme_iov_md": false 00:12:41.471 }, 00:12:41.471 "memory_domains": [ 00:12:41.471 { 00:12:41.471 "dma_device_id": "system", 00:12:41.471 "dma_device_type": 1 00:12:41.471 }, 00:12:41.471 { 00:12:41.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.471 "dma_device_type": 2 00:12:41.471 } 00:12:41.471 ], 00:12:41.471 "driver_specific": {} 00:12:41.471 } 00:12:41.471 ] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.471 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.471 "name": "Existed_Raid", 00:12:41.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.471 "strip_size_kb": 64, 00:12:41.471 "state": "configuring", 00:12:41.471 "raid_level": "concat", 00:12:41.471 "superblock": false, 
00:12:41.471 "num_base_bdevs": 3, 00:12:41.471 "num_base_bdevs_discovered": 2, 00:12:41.471 "num_base_bdevs_operational": 3, 00:12:41.471 "base_bdevs_list": [ 00:12:41.471 { 00:12:41.471 "name": "BaseBdev1", 00:12:41.471 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:41.471 "is_configured": true, 00:12:41.471 "data_offset": 0, 00:12:41.471 "data_size": 65536 00:12:41.471 }, 00:12:41.471 { 00:12:41.471 "name": "BaseBdev2", 00:12:41.471 "uuid": "630ad9b8-4309-4e43-b008-d1ae79191043", 00:12:41.472 "is_configured": true, 00:12:41.472 "data_offset": 0, 00:12:41.472 "data_size": 65536 00:12:41.472 }, 00:12:41.472 { 00:12:41.472 "name": "BaseBdev3", 00:12:41.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.472 "is_configured": false, 00:12:41.472 "data_offset": 0, 00:12:41.472 "data_size": 0 00:12:41.472 } 00:12:41.472 ] 00:12:41.472 }' 00:12:41.472 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.472 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.039 [2024-11-20 11:24:49.718768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.039 [2024-11-20 11:24:49.719018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:42.039 [2024-11-20 11:24:49.719054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:42.039 [2024-11-20 11:24:49.719411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.039 [2024-11-20 11:24:49.719677] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:12:42.039 [2024-11-20 11:24:49.719696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:42.039 [2024-11-20 11:24:49.720004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.039 BaseBdev3 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.039 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.039 [ 00:12:42.039 { 00:12:42.039 "name": "BaseBdev3", 00:12:42.039 "aliases": [ 
00:12:42.039 "875f3be2-2c43-493c-9239-a60a6a6fe3cc" 00:12:42.039 ], 00:12:42.039 "product_name": "Malloc disk", 00:12:42.039 "block_size": 512, 00:12:42.039 "num_blocks": 65536, 00:12:42.039 "uuid": "875f3be2-2c43-493c-9239-a60a6a6fe3cc", 00:12:42.039 "assigned_rate_limits": { 00:12:42.039 "rw_ios_per_sec": 0, 00:12:42.039 "rw_mbytes_per_sec": 0, 00:12:42.039 "r_mbytes_per_sec": 0, 00:12:42.039 "w_mbytes_per_sec": 0 00:12:42.039 }, 00:12:42.039 "claimed": true, 00:12:42.039 "claim_type": "exclusive_write", 00:12:42.039 "zoned": false, 00:12:42.039 "supported_io_types": { 00:12:42.039 "read": true, 00:12:42.039 "write": true, 00:12:42.039 "unmap": true, 00:12:42.039 "flush": true, 00:12:42.039 "reset": true, 00:12:42.039 "nvme_admin": false, 00:12:42.039 "nvme_io": false, 00:12:42.039 "nvme_io_md": false, 00:12:42.039 "write_zeroes": true, 00:12:42.039 "zcopy": true, 00:12:42.039 "get_zone_info": false, 00:12:42.039 "zone_management": false, 00:12:42.039 "zone_append": false, 00:12:42.039 "compare": false, 00:12:42.039 "compare_and_write": false, 00:12:42.039 "abort": true, 00:12:42.039 "seek_hole": false, 00:12:42.039 "seek_data": false, 00:12:42.039 "copy": true, 00:12:42.039 "nvme_iov_md": false 00:12:42.039 }, 00:12:42.039 "memory_domains": [ 00:12:42.039 { 00:12:42.039 "dma_device_id": "system", 00:12:42.039 "dma_device_type": 1 00:12:42.039 }, 00:12:42.039 { 00:12:42.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.039 "dma_device_type": 2 00:12:42.039 } 00:12:42.039 ], 00:12:42.039 "driver_specific": {} 00:12:42.039 } 00:12:42.039 ] 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.040 "name": "Existed_Raid", 00:12:42.040 "uuid": "28f77f7a-5ff5-49bf-8b1b-a2f4a11a6263", 00:12:42.040 "strip_size_kb": 64, 00:12:42.040 "state": "online", 
00:12:42.040 "raid_level": "concat", 00:12:42.040 "superblock": false, 00:12:42.040 "num_base_bdevs": 3, 00:12:42.040 "num_base_bdevs_discovered": 3, 00:12:42.040 "num_base_bdevs_operational": 3, 00:12:42.040 "base_bdevs_list": [ 00:12:42.040 { 00:12:42.040 "name": "BaseBdev1", 00:12:42.040 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:42.040 "is_configured": true, 00:12:42.040 "data_offset": 0, 00:12:42.040 "data_size": 65536 00:12:42.040 }, 00:12:42.040 { 00:12:42.040 "name": "BaseBdev2", 00:12:42.040 "uuid": "630ad9b8-4309-4e43-b008-d1ae79191043", 00:12:42.040 "is_configured": true, 00:12:42.040 "data_offset": 0, 00:12:42.040 "data_size": 65536 00:12:42.040 }, 00:12:42.040 { 00:12:42.040 "name": "BaseBdev3", 00:12:42.040 "uuid": "875f3be2-2c43-493c-9239-a60a6a6fe3cc", 00:12:42.040 "is_configured": true, 00:12:42.040 "data_offset": 0, 00:12:42.040 "data_size": 65536 00:12:42.040 } 00:12:42.040 ] 00:12:42.040 }' 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.040 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.606 11:24:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 [2024-11-20 11:24:50.287366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.606 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.606 "name": "Existed_Raid", 00:12:42.606 "aliases": [ 00:12:42.606 "28f77f7a-5ff5-49bf-8b1b-a2f4a11a6263" 00:12:42.606 ], 00:12:42.606 "product_name": "Raid Volume", 00:12:42.606 "block_size": 512, 00:12:42.606 "num_blocks": 196608, 00:12:42.606 "uuid": "28f77f7a-5ff5-49bf-8b1b-a2f4a11a6263", 00:12:42.606 "assigned_rate_limits": { 00:12:42.606 "rw_ios_per_sec": 0, 00:12:42.606 "rw_mbytes_per_sec": 0, 00:12:42.606 "r_mbytes_per_sec": 0, 00:12:42.606 "w_mbytes_per_sec": 0 00:12:42.606 }, 00:12:42.606 "claimed": false, 00:12:42.607 "zoned": false, 00:12:42.607 "supported_io_types": { 00:12:42.607 "read": true, 00:12:42.607 "write": true, 00:12:42.607 "unmap": true, 00:12:42.607 "flush": true, 00:12:42.607 "reset": true, 00:12:42.607 "nvme_admin": false, 00:12:42.607 "nvme_io": false, 00:12:42.607 "nvme_io_md": false, 00:12:42.607 "write_zeroes": true, 00:12:42.607 "zcopy": false, 00:12:42.607 "get_zone_info": false, 00:12:42.607 "zone_management": false, 00:12:42.607 "zone_append": false, 00:12:42.607 "compare": false, 00:12:42.607 "compare_and_write": false, 00:12:42.607 "abort": false, 00:12:42.607 "seek_hole": false, 00:12:42.607 "seek_data": false, 00:12:42.607 "copy": false, 00:12:42.607 "nvme_iov_md": false 00:12:42.607 }, 00:12:42.607 "memory_domains": [ 00:12:42.607 { 00:12:42.607 "dma_device_id": "system", 00:12:42.607 "dma_device_type": 1 
00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.607 "dma_device_type": 2 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "system", 00:12:42.607 "dma_device_type": 1 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.607 "dma_device_type": 2 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "system", 00:12:42.607 "dma_device_type": 1 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.607 "dma_device_type": 2 00:12:42.607 } 00:12:42.607 ], 00:12:42.607 "driver_specific": { 00:12:42.607 "raid": { 00:12:42.607 "uuid": "28f77f7a-5ff5-49bf-8b1b-a2f4a11a6263", 00:12:42.607 "strip_size_kb": 64, 00:12:42.607 "state": "online", 00:12:42.607 "raid_level": "concat", 00:12:42.607 "superblock": false, 00:12:42.607 "num_base_bdevs": 3, 00:12:42.607 "num_base_bdevs_discovered": 3, 00:12:42.607 "num_base_bdevs_operational": 3, 00:12:42.607 "base_bdevs_list": [ 00:12:42.607 { 00:12:42.607 "name": "BaseBdev1", 00:12:42.607 "uuid": "ae20e6ed-7be5-4ca7-a4c7-df05f59fc7da", 00:12:42.607 "is_configured": true, 00:12:42.607 "data_offset": 0, 00:12:42.607 "data_size": 65536 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "name": "BaseBdev2", 00:12:42.607 "uuid": "630ad9b8-4309-4e43-b008-d1ae79191043", 00:12:42.607 "is_configured": true, 00:12:42.607 "data_offset": 0, 00:12:42.607 "data_size": 65536 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "name": "BaseBdev3", 00:12:42.607 "uuid": "875f3be2-2c43-493c-9239-a60a6a6fe3cc", 00:12:42.607 "is_configured": true, 00:12:42.607 "data_offset": 0, 00:12:42.607 "data_size": 65536 00:12:42.607 } 00:12:42.607 ] 00:12:42.607 } 00:12:42.607 } 00:12:42.607 }' 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:42.607 BaseBdev2 00:12:42.607 BaseBdev3' 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.607 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.866 [2024-11-20 11:24:50.607129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.866 [2024-11-20 11:24:50.607176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.866 [2024-11-20 11:24:50.607255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.866 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.124 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.124 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.124 "name": "Existed_Raid", 00:12:43.124 "uuid": "28f77f7a-5ff5-49bf-8b1b-a2f4a11a6263", 00:12:43.125 "strip_size_kb": 64, 00:12:43.125 "state": "offline", 00:12:43.125 "raid_level": "concat", 00:12:43.125 "superblock": false, 00:12:43.125 "num_base_bdevs": 3, 00:12:43.125 "num_base_bdevs_discovered": 2, 00:12:43.125 "num_base_bdevs_operational": 2, 00:12:43.125 "base_bdevs_list": [ 00:12:43.125 { 00:12:43.125 "name": null, 00:12:43.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.125 "is_configured": false, 00:12:43.125 "data_offset": 0, 00:12:43.125 "data_size": 65536 00:12:43.125 }, 00:12:43.125 { 00:12:43.125 "name": "BaseBdev2", 00:12:43.125 "uuid": "630ad9b8-4309-4e43-b008-d1ae79191043", 00:12:43.125 "is_configured": true, 00:12:43.125 "data_offset": 0, 00:12:43.125 "data_size": 65536 00:12:43.125 }, 00:12:43.125 { 00:12:43.125 "name": "BaseBdev3", 00:12:43.125 "uuid": "875f3be2-2c43-493c-9239-a60a6a6fe3cc", 00:12:43.125 "is_configured": true, 00:12:43.125 "data_offset": 0, 00:12:43.125 "data_size": 65536 00:12:43.125 } 00:12:43.125 ] 00:12:43.125 }' 00:12:43.125 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.125 11:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
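The @189–@193 steps traced above join each bdev's `block_size`, `md_size`, `md_interleave` and `dif_type` into one string and require every base bdev to match the raid volume. A minimal sketch of that comparison loop, with the joined strings hard-coded rather than produced by live `rpc_cmd bdev_get_bdevs | jq` calls as in the real script:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@189-@193 field comparison (hedged: the joined
# strings are hard-coded stand-ins for the output of
#   rpc_cmd bdev_get_bdevs -b "$name" |
#     jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# ).
set -e

# "512" plus three empty metadata fields joins to "512" with 3 trailing spaces.
cmp_raid_bdev='512   '
base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'

for name in $base_bdev_names; do
    cmp_base_bdev='512   '   # would come from bdev_get_bdevs -b "$name"
    [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] || {
        echo "field mismatch on $name" >&2
        exit 1
    }
done
echo "all base bdevs match the raid volume"
```

The trailing spaces matter: the trace's `[[ 512 == \5\1\2\ \ \ ]]` comparison only passes because both sides carry the same empty metadata fields.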
00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.384 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 [2024-11-20 11:24:51.253548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 11:24:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 [2024-11-20 11:24:51.402623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.643 [2024-11-20 11:24:51.402721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:43.902 
11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 BaseBdev2 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.902 [ 00:12:43.902 { 00:12:43.902 "name": "BaseBdev2", 00:12:43.902 "aliases": [ 00:12:43.902 "87f31f02-3cae-4db8-b7d5-02e87485e7bd" 00:12:43.902 ], 00:12:43.902 "product_name": "Malloc disk", 00:12:43.902 "block_size": 512, 00:12:43.902 "num_blocks": 65536, 00:12:43.902 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:43.902 "assigned_rate_limits": { 00:12:43.902 "rw_ios_per_sec": 0, 00:12:43.902 "rw_mbytes_per_sec": 0, 00:12:43.902 "r_mbytes_per_sec": 0, 00:12:43.902 "w_mbytes_per_sec": 0 00:12:43.902 }, 00:12:43.902 "claimed": false, 00:12:43.902 "zoned": false, 00:12:43.902 "supported_io_types": { 00:12:43.902 "read": true, 00:12:43.902 "write": true, 00:12:43.902 "unmap": true, 00:12:43.902 "flush": true, 00:12:43.902 "reset": true, 00:12:43.902 "nvme_admin": false, 00:12:43.902 "nvme_io": false, 00:12:43.902 "nvme_io_md": false, 00:12:43.902 "write_zeroes": true, 00:12:43.902 "zcopy": true, 00:12:43.902 "get_zone_info": false, 00:12:43.902 "zone_management": false, 00:12:43.902 "zone_append": false, 00:12:43.902 "compare": false, 00:12:43.902 "compare_and_write": false, 00:12:43.902 "abort": true, 00:12:43.902 "seek_hole": false, 00:12:43.902 "seek_data": false, 00:12:43.902 "copy": true, 00:12:43.902 "nvme_iov_md": false 00:12:43.902 }, 00:12:43.902 "memory_domains": [ 00:12:43.902 { 00:12:43.902 "dma_device_id": "system", 00:12:43.902 "dma_device_type": 1 00:12:43.902 }, 00:12:43.902 { 00:12:43.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.902 "dma_device_type": 2 00:12:43.902 } 00:12:43.902 ], 00:12:43.902 "driver_specific": {} 00:12:43.902 } 00:12:43.902 ] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.902 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.902 
11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.903 BaseBdev3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.903 [ 00:12:43.903 { 00:12:43.903 "name": "BaseBdev3", 00:12:43.903 "aliases": [ 00:12:43.903 "2db0d713-b848-4424-a739-3fbb2fed5a9a" 00:12:43.903 ], 00:12:43.903 "product_name": "Malloc disk", 00:12:43.903 "block_size": 512, 00:12:43.903 "num_blocks": 65536, 00:12:43.903 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:43.903 "assigned_rate_limits": { 00:12:43.903 "rw_ios_per_sec": 0, 00:12:43.903 "rw_mbytes_per_sec": 0, 00:12:43.903 "r_mbytes_per_sec": 0, 00:12:43.903 "w_mbytes_per_sec": 0 00:12:43.903 }, 00:12:43.903 "claimed": false, 00:12:43.903 "zoned": false, 00:12:43.903 "supported_io_types": { 00:12:43.903 "read": true, 00:12:43.903 "write": true, 00:12:43.903 "unmap": true, 00:12:43.903 "flush": true, 00:12:43.903 "reset": true, 00:12:43.903 "nvme_admin": false, 00:12:43.903 "nvme_io": false, 00:12:43.903 "nvme_io_md": false, 00:12:43.903 "write_zeroes": true, 00:12:43.903 "zcopy": true, 00:12:43.903 "get_zone_info": false, 00:12:43.903 "zone_management": false, 00:12:43.903 "zone_append": false, 00:12:43.903 "compare": false, 00:12:43.903 "compare_and_write": false, 00:12:43.903 "abort": true, 00:12:43.903 "seek_hole": false, 00:12:43.903 "seek_data": false, 00:12:43.903 "copy": true, 00:12:43.903 "nvme_iov_md": false 00:12:43.903 }, 00:12:43.903 "memory_domains": [ 00:12:43.903 { 00:12:43.903 "dma_device_id": "system", 00:12:43.903 "dma_device_type": 1 00:12:43.903 }, 00:12:43.903 { 00:12:43.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.903 "dma_device_type": 2 00:12:43.903 } 00:12:43.903 ], 00:12:43.903 "driver_specific": {} 00:12:43.903 } 00:12:43.903 ] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.903 
11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.903 [2024-11-20 11:24:51.730913] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.903 [2024-11-20 11:24:51.731113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.903 [2024-11-20 11:24:51.731262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.903 [2024-11-20 11:24:51.733711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.903 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.162 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.162 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.162 "name": "Existed_Raid", 00:12:44.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.162 "strip_size_kb": 64, 00:12:44.162 "state": "configuring", 00:12:44.162 "raid_level": "concat", 00:12:44.162 "superblock": false, 00:12:44.162 "num_base_bdevs": 3, 00:12:44.162 "num_base_bdevs_discovered": 2, 00:12:44.162 "num_base_bdevs_operational": 3, 00:12:44.162 "base_bdevs_list": [ 00:12:44.162 { 00:12:44.162 "name": "BaseBdev1", 00:12:44.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.162 "is_configured": false, 00:12:44.162 "data_offset": 0, 00:12:44.162 "data_size": 0 00:12:44.162 }, 00:12:44.162 { 00:12:44.162 "name": "BaseBdev2", 00:12:44.162 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:44.162 "is_configured": true, 00:12:44.162 "data_offset": 0, 00:12:44.162 "data_size": 65536 00:12:44.162 }, 00:12:44.162 { 00:12:44.162 "name": "BaseBdev3", 00:12:44.162 "uuid": 
"2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:44.162 "is_configured": true, 00:12:44.162 "data_offset": 0, 00:12:44.162 "data_size": 65536 00:12:44.162 } 00:12:44.162 ] 00:12:44.162 }' 00:12:44.162 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.162 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.544 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:44.544 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.544 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.545 [2024-11-20 11:24:52.275076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.545 "name": "Existed_Raid", 00:12:44.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.545 "strip_size_kb": 64, 00:12:44.545 "state": "configuring", 00:12:44.545 "raid_level": "concat", 00:12:44.545 "superblock": false, 00:12:44.545 "num_base_bdevs": 3, 00:12:44.545 "num_base_bdevs_discovered": 1, 00:12:44.545 "num_base_bdevs_operational": 3, 00:12:44.545 "base_bdevs_list": [ 00:12:44.545 { 00:12:44.545 "name": "BaseBdev1", 00:12:44.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.545 "is_configured": false, 00:12:44.545 "data_offset": 0, 00:12:44.545 "data_size": 0 00:12:44.545 }, 00:12:44.545 { 00:12:44.545 "name": null, 00:12:44.545 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:44.545 "is_configured": false, 00:12:44.545 "data_offset": 0, 00:12:44.545 "data_size": 65536 00:12:44.545 }, 00:12:44.545 { 00:12:44.545 "name": "BaseBdev3", 00:12:44.545 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:44.545 "is_configured": true, 00:12:44.545 "data_offset": 0, 00:12:44.545 "data_size": 65536 00:12:44.545 } 00:12:44.545 ] 00:12:44.545 }' 00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
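After removing BaseBdev2, `verify_raid_bdev_state` above checks that Existed_Raid drops back to "configuring" with one discovered of three operational base bdevs. A sketch of those checks, with the fields copied from the traced JSON instead of being parsed out of a live `rpc_cmd bdev_raid_get_bdevs all` response:

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state assertions (hedged: values are
# hard-coded from the trace, not parsed from a live RPC response).
set -e

state='configuring'
raid_level='concat'
strip_size_kb=64
num_base_bdevs_discovered=1
num_base_bdevs_operational=3

expected_state='configuring'
expected_operational=3

# Each check mirrors one comparison the helper performs on the raid JSON.
[[ "$state" == "$expected_state" ]]
[[ "$raid_level" == concat ]]
(( strip_size_kb == 64 ))
(( num_base_bdevs_operational == expected_operational ))
echo "Existed_Raid: $state, $num_base_bdevs_discovered/$num_base_bdevs_operational base bdevs"
```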
00:12:44.545 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.115 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 [2024-11-20 11:24:52.881315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.116 BaseBdev1 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 [ 00:12:45.116 { 00:12:45.116 "name": "BaseBdev1", 00:12:45.116 "aliases": [ 00:12:45.116 "3b440f24-2f20-4c45-bef3-287492895c6f" 00:12:45.116 ], 00:12:45.116 "product_name": "Malloc disk", 00:12:45.116 "block_size": 512, 00:12:45.116 "num_blocks": 65536, 00:12:45.116 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:45.116 "assigned_rate_limits": { 00:12:45.116 "rw_ios_per_sec": 0, 00:12:45.116 "rw_mbytes_per_sec": 0, 00:12:45.116 "r_mbytes_per_sec": 0, 00:12:45.116 "w_mbytes_per_sec": 0 00:12:45.116 }, 00:12:45.116 "claimed": true, 00:12:45.116 "claim_type": "exclusive_write", 00:12:45.116 "zoned": false, 00:12:45.116 "supported_io_types": { 00:12:45.116 "read": true, 00:12:45.116 "write": true, 00:12:45.116 "unmap": true, 00:12:45.116 "flush": true, 00:12:45.116 "reset": true, 00:12:45.116 "nvme_admin": false, 00:12:45.116 "nvme_io": false, 00:12:45.116 "nvme_io_md": false, 00:12:45.116 "write_zeroes": true, 00:12:45.116 "zcopy": true, 00:12:45.116 "get_zone_info": false, 00:12:45.116 "zone_management": false, 00:12:45.116 "zone_append": false, 00:12:45.116 "compare": false, 00:12:45.116 "compare_and_write": false, 
00:12:45.116 "abort": true, 00:12:45.116 "seek_hole": false, 00:12:45.116 "seek_data": false, 00:12:45.116 "copy": true, 00:12:45.116 "nvme_iov_md": false 00:12:45.116 }, 00:12:45.116 "memory_domains": [ 00:12:45.116 { 00:12:45.116 "dma_device_id": "system", 00:12:45.116 "dma_device_type": 1 00:12:45.116 }, 00:12:45.116 { 00:12:45.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.116 "dma_device_type": 2 00:12:45.116 } 00:12:45.116 ], 00:12:45.116 "driver_specific": {} 00:12:45.116 } 00:12:45.116 ] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
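The `waitforbdev BaseBdev1` sequence above repeatedly queries the bdev until it is examinable. A minimal sketch of that polling idea, where `check_bdev` is a hypothetical stand-in for `rpc_cmd bdev_get_bdevs -b "$name" -t 2000` and a fake backend reports the bdev present on the third poll:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev polling pattern (hedged: check_bdev is a
# hypothetical stand-in for the real RPC query; here it succeeds on poll 3).

attempts=0
check_bdev() { (( ++attempts >= 3 )); }

waitforbdev() {
    local name=$1 tries=${2:-10} i
    for ((i = 0; i < tries; i++)); do
        if check_bdev "$name"; then
            echo "$name is ready after $attempts polls"
            return 0
        fi
        sleep 0.1
    done
    echo "$name did not appear" >&2
    return 1
}

waitforbdev BaseBdev1
```

The real helper also passes a timeout (`-t 2000`) down to the RPC itself, so the daemon side can block instead of the shell busy-polling.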
00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.375 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.375 "name": "Existed_Raid", 00:12:45.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.375 "strip_size_kb": 64, 00:12:45.375 "state": "configuring", 00:12:45.375 "raid_level": "concat", 00:12:45.375 "superblock": false, 00:12:45.375 "num_base_bdevs": 3, 00:12:45.375 "num_base_bdevs_discovered": 2, 00:12:45.375 "num_base_bdevs_operational": 3, 00:12:45.375 "base_bdevs_list": [ 00:12:45.375 { 00:12:45.375 "name": "BaseBdev1", 00:12:45.375 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:45.375 "is_configured": true, 00:12:45.375 "data_offset": 0, 00:12:45.375 "data_size": 65536 00:12:45.375 }, 00:12:45.375 { 00:12:45.375 "name": null, 00:12:45.375 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:45.375 "is_configured": false, 00:12:45.375 "data_offset": 0, 00:12:45.375 "data_size": 65536 00:12:45.375 }, 00:12:45.375 { 00:12:45.375 "name": "BaseBdev3", 00:12:45.375 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:45.375 "is_configured": true, 00:12:45.375 "data_offset": 0, 00:12:45.375 "data_size": 65536 00:12:45.375 } 00:12:45.375 ] 00:12:45.375 }' 00:12:45.375 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.375 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 
00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.636 [2024-11-20 11:24:53.461493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.636 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.895 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.895 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.895 "name": "Existed_Raid", 00:12:45.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.895 "strip_size_kb": 64, 00:12:45.895 "state": "configuring", 00:12:45.895 "raid_level": "concat", 00:12:45.895 "superblock": false, 00:12:45.895 "num_base_bdevs": 3, 00:12:45.895 "num_base_bdevs_discovered": 1, 00:12:45.895 "num_base_bdevs_operational": 3, 00:12:45.895 "base_bdevs_list": [ 00:12:45.895 { 00:12:45.895 "name": "BaseBdev1", 00:12:45.895 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:45.895 "is_configured": true, 00:12:45.895 "data_offset": 0, 00:12:45.895 "data_size": 65536 00:12:45.895 }, 00:12:45.895 { 00:12:45.895 "name": null, 00:12:45.895 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:45.895 "is_configured": false, 00:12:45.895 "data_offset": 0, 00:12:45.895 "data_size": 65536 00:12:45.895 }, 00:12:45.895 { 00:12:45.895 "name": null, 00:12:45.895 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:45.895 "is_configured": false, 00:12:45.895 "data_offset": 0, 00:12:45.895 "data_size": 65536 
00:12:45.895 } 00:12:45.895 ] 00:12:45.895 }' 00:12:45.895 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.895 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.154 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.154 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:46.154 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.154 11:24:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 [2024-11-20 11:24:54.041719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.413 "name": "Existed_Raid", 00:12:46.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.413 "strip_size_kb": 64, 00:12:46.413 "state": "configuring", 00:12:46.413 "raid_level": "concat", 00:12:46.413 "superblock": false, 00:12:46.413 "num_base_bdevs": 3, 00:12:46.413 "num_base_bdevs_discovered": 2, 00:12:46.413 "num_base_bdevs_operational": 3, 00:12:46.413 "base_bdevs_list": [ 00:12:46.413 { 00:12:46.413 "name": "BaseBdev1", 00:12:46.413 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:46.413 "is_configured": true, 00:12:46.413 "data_offset": 0, 00:12:46.413 "data_size": 65536 00:12:46.413 }, 00:12:46.413 { 
00:12:46.413 "name": null, 00:12:46.413 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:46.413 "is_configured": false, 00:12:46.413 "data_offset": 0, 00:12:46.413 "data_size": 65536 00:12:46.413 }, 00:12:46.413 { 00:12:46.413 "name": "BaseBdev3", 00:12:46.413 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:46.413 "is_configured": true, 00:12:46.413 "data_offset": 0, 00:12:46.413 "data_size": 65536 00:12:46.413 } 00:12:46.413 ] 00:12:46.413 }' 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.413 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.982 [2024-11-20 11:24:54.601884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.982 "name": "Existed_Raid", 00:12:46.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.982 "strip_size_kb": 64, 00:12:46.982 "state": "configuring", 00:12:46.982 "raid_level": "concat", 00:12:46.982 "superblock": false, 00:12:46.982 "num_base_bdevs": 3, 
00:12:46.982 "num_base_bdevs_discovered": 1, 00:12:46.982 "num_base_bdevs_operational": 3, 00:12:46.982 "base_bdevs_list": [ 00:12:46.982 { 00:12:46.982 "name": null, 00:12:46.982 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:46.982 "is_configured": false, 00:12:46.982 "data_offset": 0, 00:12:46.982 "data_size": 65536 00:12:46.982 }, 00:12:46.982 { 00:12:46.982 "name": null, 00:12:46.982 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:46.982 "is_configured": false, 00:12:46.982 "data_offset": 0, 00:12:46.982 "data_size": 65536 00:12:46.982 }, 00:12:46.982 { 00:12:46.982 "name": "BaseBdev3", 00:12:46.982 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:46.982 "is_configured": true, 00:12:46.982 "data_offset": 0, 00:12:46.982 "data_size": 65536 00:12:46.982 } 00:12:46.982 ] 00:12:46.982 }' 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.982 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.550 11:24:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 [2024-11-20 11:24:55.239578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.550 "name": "Existed_Raid", 00:12:47.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.550 "strip_size_kb": 64, 00:12:47.550 "state": "configuring", 00:12:47.550 "raid_level": "concat", 00:12:47.550 "superblock": false, 00:12:47.550 "num_base_bdevs": 3, 00:12:47.550 "num_base_bdevs_discovered": 2, 00:12:47.550 "num_base_bdevs_operational": 3, 00:12:47.550 "base_bdevs_list": [ 00:12:47.550 { 00:12:47.550 "name": null, 00:12:47.550 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:47.550 "is_configured": false, 00:12:47.550 "data_offset": 0, 00:12:47.550 "data_size": 65536 00:12:47.550 }, 00:12:47.550 { 00:12:47.550 "name": "BaseBdev2", 00:12:47.550 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:47.550 "is_configured": true, 00:12:47.550 "data_offset": 0, 00:12:47.550 "data_size": 65536 00:12:47.550 }, 00:12:47.550 { 00:12:47.550 "name": "BaseBdev3", 00:12:47.550 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:47.550 "is_configured": true, 00:12:47.550 "data_offset": 0, 00:12:47.550 "data_size": 65536 00:12:47.550 } 00:12:47.550 ] 00:12:47.550 }' 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.550 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b440f24-2f20-4c45-bef3-287492895c6f 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.119 [2024-11-20 11:24:55.884030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:48.119 [2024-11-20 11:24:55.884107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:48.119 [2024-11-20 11:24:55.884132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:48.119 [2024-11-20 11:24:55.884453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:48.119 [2024-11-20 11:24:55.884680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:48.119 [2024-11-20 11:24:55.884698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:48.119 [2024-11-20 11:24:55.885009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:48.119 NewBaseBdev 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.119 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.120 [ 00:12:48.120 { 00:12:48.120 "name": "NewBaseBdev", 00:12:48.120 "aliases": [ 00:12:48.120 "3b440f24-2f20-4c45-bef3-287492895c6f" 00:12:48.120 ], 00:12:48.120 "product_name": "Malloc disk", 00:12:48.120 "block_size": 512, 00:12:48.120 "num_blocks": 65536, 00:12:48.120 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:48.120 "assigned_rate_limits": { 
00:12:48.120 "rw_ios_per_sec": 0, 00:12:48.120 "rw_mbytes_per_sec": 0, 00:12:48.120 "r_mbytes_per_sec": 0, 00:12:48.120 "w_mbytes_per_sec": 0 00:12:48.120 }, 00:12:48.120 "claimed": true, 00:12:48.120 "claim_type": "exclusive_write", 00:12:48.120 "zoned": false, 00:12:48.120 "supported_io_types": { 00:12:48.120 "read": true, 00:12:48.120 "write": true, 00:12:48.120 "unmap": true, 00:12:48.120 "flush": true, 00:12:48.120 "reset": true, 00:12:48.120 "nvme_admin": false, 00:12:48.120 "nvme_io": false, 00:12:48.120 "nvme_io_md": false, 00:12:48.120 "write_zeroes": true, 00:12:48.120 "zcopy": true, 00:12:48.120 "get_zone_info": false, 00:12:48.120 "zone_management": false, 00:12:48.120 "zone_append": false, 00:12:48.120 "compare": false, 00:12:48.120 "compare_and_write": false, 00:12:48.120 "abort": true, 00:12:48.120 "seek_hole": false, 00:12:48.120 "seek_data": false, 00:12:48.120 "copy": true, 00:12:48.120 "nvme_iov_md": false 00:12:48.120 }, 00:12:48.120 "memory_domains": [ 00:12:48.120 { 00:12:48.120 "dma_device_id": "system", 00:12:48.120 "dma_device_type": 1 00:12:48.120 }, 00:12:48.120 { 00:12:48.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.120 "dma_device_type": 2 00:12:48.120 } 00:12:48.120 ], 00:12:48.120 "driver_specific": {} 00:12:48.120 } 00:12:48.120 ] 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.120 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.379 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.379 "name": "Existed_Raid", 00:12:48.379 "uuid": "74bc561b-d787-4fb0-a69c-547c9df46bb2", 00:12:48.379 "strip_size_kb": 64, 00:12:48.379 "state": "online", 00:12:48.379 "raid_level": "concat", 00:12:48.379 "superblock": false, 00:12:48.379 "num_base_bdevs": 3, 00:12:48.379 "num_base_bdevs_discovered": 3, 00:12:48.379 "num_base_bdevs_operational": 3, 00:12:48.379 "base_bdevs_list": [ 00:12:48.379 { 00:12:48.379 "name": "NewBaseBdev", 00:12:48.379 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:48.379 "is_configured": true, 00:12:48.379 "data_offset": 0, 00:12:48.379 "data_size": 65536 00:12:48.379 }, 00:12:48.379 { 00:12:48.379 "name": 
"BaseBdev2", 00:12:48.379 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:48.379 "is_configured": true, 00:12:48.379 "data_offset": 0, 00:12:48.379 "data_size": 65536 00:12:48.379 }, 00:12:48.379 { 00:12:48.379 "name": "BaseBdev3", 00:12:48.379 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:48.379 "is_configured": true, 00:12:48.379 "data_offset": 0, 00:12:48.379 "data_size": 65536 00:12:48.379 } 00:12:48.379 ] 00:12:48.379 }' 00:12:48.379 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.379 11:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.638 [2024-11-20 11:24:56.440714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:48.638 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.638 "name": "Existed_Raid", 00:12:48.638 "aliases": [ 00:12:48.638 "74bc561b-d787-4fb0-a69c-547c9df46bb2" 00:12:48.638 ], 00:12:48.638 "product_name": "Raid Volume", 00:12:48.638 "block_size": 512, 00:12:48.638 "num_blocks": 196608, 00:12:48.638 "uuid": "74bc561b-d787-4fb0-a69c-547c9df46bb2", 00:12:48.638 "assigned_rate_limits": { 00:12:48.638 "rw_ios_per_sec": 0, 00:12:48.638 "rw_mbytes_per_sec": 0, 00:12:48.638 "r_mbytes_per_sec": 0, 00:12:48.638 "w_mbytes_per_sec": 0 00:12:48.638 }, 00:12:48.638 "claimed": false, 00:12:48.638 "zoned": false, 00:12:48.638 "supported_io_types": { 00:12:48.638 "read": true, 00:12:48.638 "write": true, 00:12:48.638 "unmap": true, 00:12:48.638 "flush": true, 00:12:48.638 "reset": true, 00:12:48.638 "nvme_admin": false, 00:12:48.638 "nvme_io": false, 00:12:48.638 "nvme_io_md": false, 00:12:48.638 "write_zeroes": true, 00:12:48.638 "zcopy": false, 00:12:48.638 "get_zone_info": false, 00:12:48.638 "zone_management": false, 00:12:48.638 "zone_append": false, 00:12:48.638 "compare": false, 00:12:48.638 "compare_and_write": false, 00:12:48.638 "abort": false, 00:12:48.638 "seek_hole": false, 00:12:48.638 "seek_data": false, 00:12:48.638 "copy": false, 00:12:48.638 "nvme_iov_md": false 00:12:48.638 }, 00:12:48.638 "memory_domains": [ 00:12:48.638 { 00:12:48.638 "dma_device_id": "system", 00:12:48.638 "dma_device_type": 1 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.638 "dma_device_type": 2 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "dma_device_id": "system", 00:12:48.638 "dma_device_type": 1 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.638 "dma_device_type": 2 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "dma_device_id": "system", 00:12:48.638 "dma_device_type": 1 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:48.638 "dma_device_type": 2 00:12:48.638 } 00:12:48.638 ], 00:12:48.638 "driver_specific": { 00:12:48.638 "raid": { 00:12:48.638 "uuid": "74bc561b-d787-4fb0-a69c-547c9df46bb2", 00:12:48.638 "strip_size_kb": 64, 00:12:48.638 "state": "online", 00:12:48.638 "raid_level": "concat", 00:12:48.638 "superblock": false, 00:12:48.638 "num_base_bdevs": 3, 00:12:48.638 "num_base_bdevs_discovered": 3, 00:12:48.638 "num_base_bdevs_operational": 3, 00:12:48.638 "base_bdevs_list": [ 00:12:48.638 { 00:12:48.638 "name": "NewBaseBdev", 00:12:48.638 "uuid": "3b440f24-2f20-4c45-bef3-287492895c6f", 00:12:48.638 "is_configured": true, 00:12:48.638 "data_offset": 0, 00:12:48.638 "data_size": 65536 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "name": "BaseBdev2", 00:12:48.638 "uuid": "87f31f02-3cae-4db8-b7d5-02e87485e7bd", 00:12:48.638 "is_configured": true, 00:12:48.638 "data_offset": 0, 00:12:48.638 "data_size": 65536 00:12:48.638 }, 00:12:48.638 { 00:12:48.638 "name": "BaseBdev3", 00:12:48.638 "uuid": "2db0d713-b848-4424-a739-3fbb2fed5a9a", 00:12:48.638 "is_configured": true, 00:12:48.638 "data_offset": 0, 00:12:48.638 "data_size": 65536 00:12:48.638 } 00:12:48.638 ] 00:12:48.638 } 00:12:48.638 } 00:12:48.638 }' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:48.897 BaseBdev2 00:12:48.897 BaseBdev3' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.897 11:24:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:48.897 
11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.897 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.897 [2024-11-20 11:24:56.740399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.897 [2024-11-20 11:24:56.740433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.897 [2024-11-20 11:24:56.740539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.897 [2024-11-20 11:24:56.740623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.897 [2024-11-20 11:24:56.740643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65556 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65556 ']' 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65556 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65556 00:12:49.156 killing process with pid 65556 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65556' 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65556 00:12:49.156 [2024-11-20 11:24:56.779528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.156 11:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65556 00:12:49.415 [2024-11-20 11:24:57.048980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:50.351 00:12:50.351 real 0m11.712s 00:12:50.351 user 0m19.392s 00:12:50.351 sys 0m1.599s 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 ************************************ 00:12:50.351 END TEST raid_state_function_test 00:12:50.351 ************************************ 00:12:50.351 11:24:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:12:50.351 11:24:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.351 11:24:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.351 11:24:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.351 ************************************ 00:12:50.351 START TEST raid_state_function_test_sb 00:12:50.351 ************************************ 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.351 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66194 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66194' 00:12:50.352 Process raid pid: 66194 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66194 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66194 ']' 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.352 11:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.611 [2024-11-20 11:24:58.243927] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:12:50.611 [2024-11-20 11:24:58.244413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.611 [2024-11-20 11:24:58.436790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.869 [2024-11-20 11:24:58.586142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.128 [2024-11-20 11:24:58.789712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.128 [2024-11-20 11:24:58.789775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.388 [2024-11-20 11:24:59.221111] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.388 [2024-11-20 11:24:59.221189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.388 [2024-11-20 11:24:59.221207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.388 [2024-11-20 11:24:59.221223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.388 [2024-11-20 11:24:59.221233] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:51.388 [2024-11-20 11:24:59.221247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.388 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.647 11:24:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.647 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.647 "name": "Existed_Raid", 00:12:51.647 "uuid": "c1f3f393-39e6-455a-8f72-861fcb8f703e", 00:12:51.647 "strip_size_kb": 64, 00:12:51.647 "state": "configuring", 00:12:51.647 "raid_level": "concat", 00:12:51.647 "superblock": true, 00:12:51.647 "num_base_bdevs": 3, 00:12:51.647 "num_base_bdevs_discovered": 0, 00:12:51.647 "num_base_bdevs_operational": 3, 00:12:51.647 "base_bdevs_list": [ 00:12:51.647 { 00:12:51.647 "name": "BaseBdev1", 00:12:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.647 "is_configured": false, 00:12:51.647 "data_offset": 0, 00:12:51.647 "data_size": 0 00:12:51.647 }, 00:12:51.647 { 00:12:51.647 "name": "BaseBdev2", 00:12:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.647 "is_configured": false, 00:12:51.647 "data_offset": 0, 00:12:51.647 "data_size": 0 00:12:51.647 }, 00:12:51.647 { 00:12:51.647 "name": "BaseBdev3", 00:12:51.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.647 "is_configured": false, 00:12:51.647 "data_offset": 0, 00:12:51.647 "data_size": 0 00:12:51.647 } 00:12:51.647 ] 00:12:51.647 }' 00:12:51.647 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.647 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.905 [2024-11-20 11:24:59.697179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.905 [2024-11-20 11:24:59.697220] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.905 [2024-11-20 11:24:59.705159] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.905 [2024-11-20 11:24:59.705401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.905 [2024-11-20 11:24:59.705434] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.905 [2024-11-20 11:24:59.705450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.905 [2024-11-20 11:24:59.705460] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.905 [2024-11-20 11:24:59.705474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.905 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.164 [2024-11-20 11:24:59.751589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.164 BaseBdev1 
00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.164 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.164 [ 00:12:52.164 { 00:12:52.164 "name": "BaseBdev1", 00:12:52.164 "aliases": [ 00:12:52.164 "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6" 00:12:52.164 ], 00:12:52.164 "product_name": "Malloc disk", 00:12:52.164 "block_size": 512, 00:12:52.164 "num_blocks": 65536, 00:12:52.164 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:52.164 "assigned_rate_limits": { 00:12:52.164 
"rw_ios_per_sec": 0, 00:12:52.164 "rw_mbytes_per_sec": 0, 00:12:52.164 "r_mbytes_per_sec": 0, 00:12:52.164 "w_mbytes_per_sec": 0 00:12:52.164 }, 00:12:52.164 "claimed": true, 00:12:52.164 "claim_type": "exclusive_write", 00:12:52.164 "zoned": false, 00:12:52.164 "supported_io_types": { 00:12:52.164 "read": true, 00:12:52.164 "write": true, 00:12:52.164 "unmap": true, 00:12:52.164 "flush": true, 00:12:52.164 "reset": true, 00:12:52.164 "nvme_admin": false, 00:12:52.164 "nvme_io": false, 00:12:52.164 "nvme_io_md": false, 00:12:52.164 "write_zeroes": true, 00:12:52.164 "zcopy": true, 00:12:52.164 "get_zone_info": false, 00:12:52.164 "zone_management": false, 00:12:52.164 "zone_append": false, 00:12:52.164 "compare": false, 00:12:52.164 "compare_and_write": false, 00:12:52.164 "abort": true, 00:12:52.165 "seek_hole": false, 00:12:52.165 "seek_data": false, 00:12:52.165 "copy": true, 00:12:52.165 "nvme_iov_md": false 00:12:52.165 }, 00:12:52.165 "memory_domains": [ 00:12:52.165 { 00:12:52.165 "dma_device_id": "system", 00:12:52.165 "dma_device_type": 1 00:12:52.165 }, 00:12:52.165 { 00:12:52.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.165 "dma_device_type": 2 00:12:52.165 } 00:12:52.165 ], 00:12:52.165 "driver_specific": {} 00:12:52.165 } 00:12:52.165 ] 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.165 "name": "Existed_Raid", 00:12:52.165 "uuid": "c7574a01-288d-4c16-954a-38f9639e8de7", 00:12:52.165 "strip_size_kb": 64, 00:12:52.165 "state": "configuring", 00:12:52.165 "raid_level": "concat", 00:12:52.165 "superblock": true, 00:12:52.165 "num_base_bdevs": 3, 00:12:52.165 "num_base_bdevs_discovered": 1, 00:12:52.165 "num_base_bdevs_operational": 3, 00:12:52.165 "base_bdevs_list": [ 00:12:52.165 { 00:12:52.165 "name": "BaseBdev1", 00:12:52.165 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:52.165 "is_configured": true, 00:12:52.165 "data_offset": 2048, 00:12:52.165 "data_size": 
63488 00:12:52.165 }, 00:12:52.165 { 00:12:52.165 "name": "BaseBdev2", 00:12:52.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.165 "is_configured": false, 00:12:52.165 "data_offset": 0, 00:12:52.165 "data_size": 0 00:12:52.165 }, 00:12:52.165 { 00:12:52.165 "name": "BaseBdev3", 00:12:52.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.165 "is_configured": false, 00:12:52.165 "data_offset": 0, 00:12:52.165 "data_size": 0 00:12:52.165 } 00:12:52.165 ] 00:12:52.165 }' 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.165 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.732 [2024-11-20 11:25:00.287920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.732 [2024-11-20 11:25:00.287983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.732 [2024-11-20 11:25:00.299957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.732 [2024-11-20 
11:25:00.302638] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.732 [2024-11-20 11:25:00.302826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.732 [2024-11-20 11:25:00.302952] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.732 [2024-11-20 11:25:00.303012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.732 "name": "Existed_Raid", 00:12:52.732 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:52.732 "strip_size_kb": 64, 00:12:52.732 "state": "configuring", 00:12:52.732 "raid_level": "concat", 00:12:52.732 "superblock": true, 00:12:52.732 "num_base_bdevs": 3, 00:12:52.732 "num_base_bdevs_discovered": 1, 00:12:52.732 "num_base_bdevs_operational": 3, 00:12:52.732 "base_bdevs_list": [ 00:12:52.732 { 00:12:52.732 "name": "BaseBdev1", 00:12:52.732 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:52.732 "is_configured": true, 00:12:52.732 "data_offset": 2048, 00:12:52.732 "data_size": 63488 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "name": "BaseBdev2", 00:12:52.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.732 "is_configured": false, 00:12:52.732 "data_offset": 0, 00:12:52.732 "data_size": 0 00:12:52.732 }, 00:12:52.732 { 00:12:52.732 "name": "BaseBdev3", 00:12:52.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.732 "is_configured": false, 00:12:52.732 "data_offset": 0, 00:12:52.732 "data_size": 0 00:12:52.732 } 00:12:52.732 ] 00:12:52.732 }' 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.732 11:25:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.991 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.991 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.991 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 [2024-11-20 11:25:00.848501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.250 BaseBdev2 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.250 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.250 [ 00:12:53.250 { 00:12:53.250 "name": "BaseBdev2", 00:12:53.250 "aliases": [ 00:12:53.251 "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f" 00:12:53.251 ], 00:12:53.251 "product_name": "Malloc disk", 00:12:53.251 "block_size": 512, 00:12:53.251 "num_blocks": 65536, 00:12:53.251 "uuid": "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f", 00:12:53.251 "assigned_rate_limits": { 00:12:53.251 "rw_ios_per_sec": 0, 00:12:53.251 "rw_mbytes_per_sec": 0, 00:12:53.251 "r_mbytes_per_sec": 0, 00:12:53.251 "w_mbytes_per_sec": 0 00:12:53.251 }, 00:12:53.251 "claimed": true, 00:12:53.251 "claim_type": "exclusive_write", 00:12:53.251 "zoned": false, 00:12:53.251 "supported_io_types": { 00:12:53.251 "read": true, 00:12:53.251 "write": true, 00:12:53.251 "unmap": true, 00:12:53.251 "flush": true, 00:12:53.251 "reset": true, 00:12:53.251 "nvme_admin": false, 00:12:53.251 "nvme_io": false, 00:12:53.251 "nvme_io_md": false, 00:12:53.251 "write_zeroes": true, 00:12:53.251 "zcopy": true, 00:12:53.251 "get_zone_info": false, 00:12:53.251 "zone_management": false, 00:12:53.251 "zone_append": false, 00:12:53.251 "compare": false, 00:12:53.251 "compare_and_write": false, 00:12:53.251 "abort": true, 00:12:53.251 "seek_hole": false, 00:12:53.251 "seek_data": false, 00:12:53.251 "copy": true, 00:12:53.251 "nvme_iov_md": false 00:12:53.251 }, 00:12:53.251 "memory_domains": [ 00:12:53.251 { 00:12:53.251 "dma_device_id": "system", 00:12:53.251 "dma_device_type": 1 00:12:53.251 }, 00:12:53.251 { 00:12:53.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.251 "dma_device_type": 2 00:12:53.251 } 00:12:53.251 ], 00:12:53.251 "driver_specific": {} 00:12:53.251 } 00:12:53.251 ] 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.251 "name": "Existed_Raid", 00:12:53.251 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:53.251 "strip_size_kb": 64, 00:12:53.251 "state": "configuring", 00:12:53.251 "raid_level": "concat", 00:12:53.251 "superblock": true, 00:12:53.251 "num_base_bdevs": 3, 00:12:53.251 "num_base_bdevs_discovered": 2, 00:12:53.251 "num_base_bdevs_operational": 3, 00:12:53.251 "base_bdevs_list": [ 00:12:53.251 { 00:12:53.251 "name": "BaseBdev1", 00:12:53.251 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:53.251 "is_configured": true, 00:12:53.251 "data_offset": 2048, 00:12:53.251 "data_size": 63488 00:12:53.251 }, 00:12:53.251 { 00:12:53.251 "name": "BaseBdev2", 00:12:53.251 "uuid": "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f", 00:12:53.251 "is_configured": true, 00:12:53.251 "data_offset": 2048, 00:12:53.251 "data_size": 63488 00:12:53.251 }, 00:12:53.251 { 00:12:53.251 "name": "BaseBdev3", 00:12:53.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.251 "is_configured": false, 00:12:53.251 "data_offset": 0, 00:12:53.251 "data_size": 0 00:12:53.251 } 00:12:53.251 ] 00:12:53.251 }' 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.251 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.819 [2024-11-20 11:25:01.469354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.819 [2024-11-20 11:25:01.469756] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:53.819 [2024-11-20 11:25:01.469799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:53.819 BaseBdev3 00:12:53.819 [2024-11-20 11:25:01.470147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:53.819 [2024-11-20 11:25:01.470355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:53.819 [2024-11-20 11:25:01.470374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:53.819 [2024-11-20 11:25:01.470562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.819 [ 00:12:53.819 { 00:12:53.819 "name": "BaseBdev3", 00:12:53.819 "aliases": [ 00:12:53.819 "a218f10f-2f73-401d-b2e8-54c8af6646b5" 00:12:53.819 ], 00:12:53.819 "product_name": "Malloc disk", 00:12:53.819 "block_size": 512, 00:12:53.819 "num_blocks": 65536, 00:12:53.819 "uuid": "a218f10f-2f73-401d-b2e8-54c8af6646b5", 00:12:53.819 "assigned_rate_limits": { 00:12:53.819 "rw_ios_per_sec": 0, 00:12:53.819 "rw_mbytes_per_sec": 0, 00:12:53.819 "r_mbytes_per_sec": 0, 00:12:53.819 "w_mbytes_per_sec": 0 00:12:53.819 }, 00:12:53.819 "claimed": true, 00:12:53.819 "claim_type": "exclusive_write", 00:12:53.819 "zoned": false, 00:12:53.819 "supported_io_types": { 00:12:53.819 "read": true, 00:12:53.819 "write": true, 00:12:53.819 "unmap": true, 00:12:53.819 "flush": true, 00:12:53.819 "reset": true, 00:12:53.819 "nvme_admin": false, 00:12:53.819 "nvme_io": false, 00:12:53.819 "nvme_io_md": false, 00:12:53.819 "write_zeroes": true, 00:12:53.819 "zcopy": true, 00:12:53.819 "get_zone_info": false, 00:12:53.819 "zone_management": false, 00:12:53.819 "zone_append": false, 00:12:53.819 "compare": false, 00:12:53.819 "compare_and_write": false, 00:12:53.819 "abort": true, 00:12:53.819 "seek_hole": false, 00:12:53.819 "seek_data": false, 00:12:53.819 "copy": true, 00:12:53.819 "nvme_iov_md": false 00:12:53.819 }, 00:12:53.819 "memory_domains": [ 00:12:53.819 { 00:12:53.819 "dma_device_id": "system", 00:12:53.819 "dma_device_type": 1 00:12:53.819 }, 00:12:53.819 { 00:12:53.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.819 "dma_device_type": 2 00:12:53.819 } 00:12:53.819 ], 00:12:53.819 "driver_specific": 
{} 00:12:53.819 } 00:12:53.819 ] 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.819 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.820 "name": "Existed_Raid", 00:12:53.820 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:53.820 "strip_size_kb": 64, 00:12:53.820 "state": "online", 00:12:53.820 "raid_level": "concat", 00:12:53.820 "superblock": true, 00:12:53.820 "num_base_bdevs": 3, 00:12:53.820 "num_base_bdevs_discovered": 3, 00:12:53.820 "num_base_bdevs_operational": 3, 00:12:53.820 "base_bdevs_list": [ 00:12:53.820 { 00:12:53.820 "name": "BaseBdev1", 00:12:53.820 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:53.820 "is_configured": true, 00:12:53.820 "data_offset": 2048, 00:12:53.820 "data_size": 63488 00:12:53.820 }, 00:12:53.820 { 00:12:53.820 "name": "BaseBdev2", 00:12:53.820 "uuid": "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f", 00:12:53.820 "is_configured": true, 00:12:53.820 "data_offset": 2048, 00:12:53.820 "data_size": 63488 00:12:53.820 }, 00:12:53.820 { 00:12:53.820 "name": "BaseBdev3", 00:12:53.820 "uuid": "a218f10f-2f73-401d-b2e8-54c8af6646b5", 00:12:53.820 "is_configured": true, 00:12:53.820 "data_offset": 2048, 00:12:53.820 "data_size": 63488 00:12:53.820 } 00:12:53.820 ] 00:12:53.820 }' 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.820 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.388 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.389 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.389 [2024-11-20 11:25:01.982027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.389 "name": "Existed_Raid", 00:12:54.389 "aliases": [ 00:12:54.389 "5f6c8e90-62f1-48ef-b51e-afb6a8717567" 00:12:54.389 ], 00:12:54.389 "product_name": "Raid Volume", 00:12:54.389 "block_size": 512, 00:12:54.389 "num_blocks": 190464, 00:12:54.389 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:54.389 "assigned_rate_limits": { 00:12:54.389 "rw_ios_per_sec": 0, 00:12:54.389 "rw_mbytes_per_sec": 0, 00:12:54.389 "r_mbytes_per_sec": 0, 00:12:54.389 "w_mbytes_per_sec": 0 00:12:54.389 }, 00:12:54.389 "claimed": false, 00:12:54.389 "zoned": false, 00:12:54.389 "supported_io_types": { 00:12:54.389 "read": true, 00:12:54.389 "write": true, 00:12:54.389 "unmap": true, 00:12:54.389 "flush": true, 00:12:54.389 "reset": true, 00:12:54.389 "nvme_admin": false, 00:12:54.389 "nvme_io": false, 00:12:54.389 "nvme_io_md": false, 00:12:54.389 
"write_zeroes": true, 00:12:54.389 "zcopy": false, 00:12:54.389 "get_zone_info": false, 00:12:54.389 "zone_management": false, 00:12:54.389 "zone_append": false, 00:12:54.389 "compare": false, 00:12:54.389 "compare_and_write": false, 00:12:54.389 "abort": false, 00:12:54.389 "seek_hole": false, 00:12:54.389 "seek_data": false, 00:12:54.389 "copy": false, 00:12:54.389 "nvme_iov_md": false 00:12:54.389 }, 00:12:54.389 "memory_domains": [ 00:12:54.389 { 00:12:54.389 "dma_device_id": "system", 00:12:54.389 "dma_device_type": 1 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.389 "dma_device_type": 2 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "dma_device_id": "system", 00:12:54.389 "dma_device_type": 1 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.389 "dma_device_type": 2 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "dma_device_id": "system", 00:12:54.389 "dma_device_type": 1 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.389 "dma_device_type": 2 00:12:54.389 } 00:12:54.389 ], 00:12:54.389 "driver_specific": { 00:12:54.389 "raid": { 00:12:54.389 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:54.389 "strip_size_kb": 64, 00:12:54.389 "state": "online", 00:12:54.389 "raid_level": "concat", 00:12:54.389 "superblock": true, 00:12:54.389 "num_base_bdevs": 3, 00:12:54.389 "num_base_bdevs_discovered": 3, 00:12:54.389 "num_base_bdevs_operational": 3, 00:12:54.389 "base_bdevs_list": [ 00:12:54.389 { 00:12:54.389 "name": "BaseBdev1", 00:12:54.389 "uuid": "8eee8fa2-c757-48ce-a20a-ffa9dc07cbc6", 00:12:54.389 "is_configured": true, 00:12:54.389 "data_offset": 2048, 00:12:54.389 "data_size": 63488 00:12:54.389 }, 00:12:54.389 { 00:12:54.389 "name": "BaseBdev2", 00:12:54.389 "uuid": "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f", 00:12:54.389 "is_configured": true, 00:12:54.389 "data_offset": 2048, 00:12:54.389 "data_size": 63488 00:12:54.389 }, 
00:12:54.389 { 00:12:54.389 "name": "BaseBdev3", 00:12:54.389 "uuid": "a218f10f-2f73-401d-b2e8-54c8af6646b5", 00:12:54.389 "is_configured": true, 00:12:54.389 "data_offset": 2048, 00:12:54.389 "data_size": 63488 00:12:54.389 } 00:12:54.389 ] 00:12:54.389 } 00:12:54.389 } 00:12:54.389 }' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.389 BaseBdev2 00:12:54.389 BaseBdev3' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.389 
11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.389 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.648 [2024-11-20 11:25:02.273749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.648 [2024-11-20 11:25:02.273793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.648 [2024-11-20 11:25:02.273870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.648 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.648 "name": "Existed_Raid", 00:12:54.648 "uuid": "5f6c8e90-62f1-48ef-b51e-afb6a8717567", 00:12:54.648 "strip_size_kb": 64, 00:12:54.648 "state": "offline", 00:12:54.648 "raid_level": "concat", 00:12:54.648 "superblock": true, 00:12:54.648 "num_base_bdevs": 3, 00:12:54.648 "num_base_bdevs_discovered": 2, 00:12:54.648 "num_base_bdevs_operational": 2, 00:12:54.649 "base_bdevs_list": [ 00:12:54.649 { 00:12:54.649 "name": null, 00:12:54.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.649 "is_configured": false, 00:12:54.649 "data_offset": 0, 00:12:54.649 "data_size": 63488 00:12:54.649 }, 00:12:54.649 { 00:12:54.649 "name": "BaseBdev2", 00:12:54.649 "uuid": "fadedb5b-d9a2-44c4-a3f4-4cae5fd6571f", 00:12:54.649 "is_configured": true, 00:12:54.649 "data_offset": 2048, 00:12:54.649 "data_size": 63488 00:12:54.649 }, 00:12:54.649 { 00:12:54.649 "name": "BaseBdev3", 00:12:54.649 "uuid": "a218f10f-2f73-401d-b2e8-54c8af6646b5", 
00:12:54.649 "is_configured": true, 00:12:54.649 "data_offset": 2048, 00:12:54.649 "data_size": 63488 00:12:54.649 } 00:12:54.649 ] 00:12:54.649 }' 00:12:54.649 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.649 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.216 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.217 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.217 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.217 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.217 [2024-11-20 11:25:02.919762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.217 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 [2024-11-20 11:25:03.067785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.478 [2024-11-20 11:25:03.067995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 BaseBdev2 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.478 11:25:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 [ 00:12:55.478 { 00:12:55.478 "name": "BaseBdev2", 00:12:55.478 "aliases": [ 00:12:55.478 "416b2381-8a81-4522-807b-537c84a4dd7c" 00:12:55.478 ], 00:12:55.478 "product_name": "Malloc disk", 00:12:55.478 "block_size": 512, 00:12:55.478 "num_blocks": 65536, 00:12:55.478 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:55.478 "assigned_rate_limits": { 00:12:55.478 "rw_ios_per_sec": 0, 00:12:55.478 "rw_mbytes_per_sec": 0, 00:12:55.478 "r_mbytes_per_sec": 0, 00:12:55.478 "w_mbytes_per_sec": 0 00:12:55.478 }, 00:12:55.478 "claimed": false, 00:12:55.478 "zoned": false, 00:12:55.478 "supported_io_types": { 00:12:55.478 "read": true, 00:12:55.478 "write": true, 00:12:55.478 "unmap": true, 00:12:55.478 "flush": true, 00:12:55.478 "reset": true, 00:12:55.478 "nvme_admin": false, 00:12:55.478 "nvme_io": false, 00:12:55.478 "nvme_io_md": false, 00:12:55.478 "write_zeroes": true, 00:12:55.478 "zcopy": true, 00:12:55.478 "get_zone_info": false, 00:12:55.478 
"zone_management": false, 00:12:55.478 "zone_append": false, 00:12:55.478 "compare": false, 00:12:55.478 "compare_and_write": false, 00:12:55.478 "abort": true, 00:12:55.478 "seek_hole": false, 00:12:55.478 "seek_data": false, 00:12:55.478 "copy": true, 00:12:55.478 "nvme_iov_md": false 00:12:55.478 }, 00:12:55.478 "memory_domains": [ 00:12:55.478 { 00:12:55.478 "dma_device_id": "system", 00:12:55.478 "dma_device_type": 1 00:12:55.478 }, 00:12:55.478 { 00:12:55.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.478 "dma_device_type": 2 00:12:55.478 } 00:12:55.478 ], 00:12:55.478 "driver_specific": {} 00:12:55.478 } 00:12:55.478 ] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.737 BaseBdev3 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.737 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.738 [ 00:12:55.738 { 00:12:55.738 "name": "BaseBdev3", 00:12:55.738 "aliases": [ 00:12:55.738 "e32b5785-6996-4002-9810-46685ae4f9b1" 00:12:55.738 ], 00:12:55.738 "product_name": "Malloc disk", 00:12:55.738 "block_size": 512, 00:12:55.738 "num_blocks": 65536, 00:12:55.738 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:55.738 "assigned_rate_limits": { 00:12:55.738 "rw_ios_per_sec": 0, 00:12:55.738 "rw_mbytes_per_sec": 0, 00:12:55.738 "r_mbytes_per_sec": 0, 00:12:55.738 "w_mbytes_per_sec": 0 00:12:55.738 }, 00:12:55.738 "claimed": false, 00:12:55.738 "zoned": false, 00:12:55.738 "supported_io_types": { 00:12:55.738 "read": true, 00:12:55.738 "write": true, 00:12:55.738 "unmap": true, 00:12:55.738 "flush": true, 00:12:55.738 "reset": true, 00:12:55.738 "nvme_admin": false, 00:12:55.738 "nvme_io": false, 00:12:55.738 "nvme_io_md": false, 00:12:55.738 "write_zeroes": true, 00:12:55.738 
"zcopy": true, 00:12:55.738 "get_zone_info": false, 00:12:55.738 "zone_management": false, 00:12:55.738 "zone_append": false, 00:12:55.738 "compare": false, 00:12:55.738 "compare_and_write": false, 00:12:55.738 "abort": true, 00:12:55.738 "seek_hole": false, 00:12:55.738 "seek_data": false, 00:12:55.738 "copy": true, 00:12:55.738 "nvme_iov_md": false 00:12:55.738 }, 00:12:55.738 "memory_domains": [ 00:12:55.738 { 00:12:55.738 "dma_device_id": "system", 00:12:55.738 "dma_device_type": 1 00:12:55.738 }, 00:12:55.738 { 00:12:55.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.738 "dma_device_type": 2 00:12:55.738 } 00:12:55.738 ], 00:12:55.738 "driver_specific": {} 00:12:55.738 } 00:12:55.738 ] 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.738 [2024-11-20 11:25:03.367810] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.738 [2024-11-20 11:25:03.368004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.738 [2024-11-20 11:25:03.368049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.738 [2024-11-20 11:25:03.370486] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.738 11:25:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.738 "name": "Existed_Raid", 00:12:55.738 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:55.738 "strip_size_kb": 64, 00:12:55.738 "state": "configuring", 00:12:55.738 "raid_level": "concat", 00:12:55.738 "superblock": true, 00:12:55.738 "num_base_bdevs": 3, 00:12:55.738 "num_base_bdevs_discovered": 2, 00:12:55.738 "num_base_bdevs_operational": 3, 00:12:55.738 "base_bdevs_list": [ 00:12:55.738 { 00:12:55.738 "name": "BaseBdev1", 00:12:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.738 "is_configured": false, 00:12:55.738 "data_offset": 0, 00:12:55.738 "data_size": 0 00:12:55.738 }, 00:12:55.738 { 00:12:55.738 "name": "BaseBdev2", 00:12:55.738 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:55.738 "is_configured": true, 00:12:55.738 "data_offset": 2048, 00:12:55.738 "data_size": 63488 00:12:55.738 }, 00:12:55.738 { 00:12:55.738 "name": "BaseBdev3", 00:12:55.738 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:55.738 "is_configured": true, 00:12:55.738 "data_offset": 2048, 00:12:55.738 "data_size": 63488 00:12:55.738 } 00:12:55.738 ] 00:12:55.738 }' 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.738 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.315 [2024-11-20 11:25:03.868036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.315 11:25:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.315 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.315 "name": "Existed_Raid", 00:12:56.315 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:56.315 "strip_size_kb": 64, 
00:12:56.315 "state": "configuring", 00:12:56.315 "raid_level": "concat", 00:12:56.315 "superblock": true, 00:12:56.315 "num_base_bdevs": 3, 00:12:56.315 "num_base_bdevs_discovered": 1, 00:12:56.315 "num_base_bdevs_operational": 3, 00:12:56.315 "base_bdevs_list": [ 00:12:56.315 { 00:12:56.315 "name": "BaseBdev1", 00:12:56.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.315 "is_configured": false, 00:12:56.315 "data_offset": 0, 00:12:56.315 "data_size": 0 00:12:56.315 }, 00:12:56.315 { 00:12:56.315 "name": null, 00:12:56.315 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:56.315 "is_configured": false, 00:12:56.315 "data_offset": 0, 00:12:56.315 "data_size": 63488 00:12:56.316 }, 00:12:56.316 { 00:12:56.316 "name": "BaseBdev3", 00:12:56.316 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:56.316 "is_configured": true, 00:12:56.316 "data_offset": 2048, 00:12:56.316 "data_size": 63488 00:12:56.316 } 00:12:56.316 ] 00:12:56.316 }' 00:12:56.316 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.316 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.574 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.834 [2024-11-20 11:25:04.439398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.834 BaseBdev1 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.834 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.834 
[ 00:12:56.834 { 00:12:56.834 "name": "BaseBdev1", 00:12:56.834 "aliases": [ 00:12:56.834 "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44" 00:12:56.834 ], 00:12:56.834 "product_name": "Malloc disk", 00:12:56.834 "block_size": 512, 00:12:56.834 "num_blocks": 65536, 00:12:56.834 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:56.834 "assigned_rate_limits": { 00:12:56.834 "rw_ios_per_sec": 0, 00:12:56.834 "rw_mbytes_per_sec": 0, 00:12:56.834 "r_mbytes_per_sec": 0, 00:12:56.834 "w_mbytes_per_sec": 0 00:12:56.834 }, 00:12:56.834 "claimed": true, 00:12:56.834 "claim_type": "exclusive_write", 00:12:56.834 "zoned": false, 00:12:56.834 "supported_io_types": { 00:12:56.834 "read": true, 00:12:56.834 "write": true, 00:12:56.834 "unmap": true, 00:12:56.834 "flush": true, 00:12:56.834 "reset": true, 00:12:56.834 "nvme_admin": false, 00:12:56.834 "nvme_io": false, 00:12:56.834 "nvme_io_md": false, 00:12:56.834 "write_zeroes": true, 00:12:56.834 "zcopy": true, 00:12:56.834 "get_zone_info": false, 00:12:56.834 "zone_management": false, 00:12:56.834 "zone_append": false, 00:12:56.834 "compare": false, 00:12:56.834 "compare_and_write": false, 00:12:56.834 "abort": true, 00:12:56.834 "seek_hole": false, 00:12:56.834 "seek_data": false, 00:12:56.834 "copy": true, 00:12:56.834 "nvme_iov_md": false 00:12:56.834 }, 00:12:56.834 "memory_domains": [ 00:12:56.834 { 00:12:56.834 "dma_device_id": "system", 00:12:56.834 "dma_device_type": 1 00:12:56.834 }, 00:12:56.834 { 00:12:56.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.834 "dma_device_type": 2 00:12:56.834 } 00:12:56.834 ], 00:12:56.834 "driver_specific": {} 00:12:56.835 } 00:12:56.835 ] 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.835 "name": "Existed_Raid", 00:12:56.835 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:56.835 "strip_size_kb": 64, 00:12:56.835 "state": "configuring", 00:12:56.835 "raid_level": "concat", 00:12:56.835 "superblock": true, 
00:12:56.835 "num_base_bdevs": 3, 00:12:56.835 "num_base_bdevs_discovered": 2, 00:12:56.835 "num_base_bdevs_operational": 3, 00:12:56.835 "base_bdevs_list": [ 00:12:56.835 { 00:12:56.835 "name": "BaseBdev1", 00:12:56.835 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:56.835 "is_configured": true, 00:12:56.835 "data_offset": 2048, 00:12:56.835 "data_size": 63488 00:12:56.835 }, 00:12:56.835 { 00:12:56.835 "name": null, 00:12:56.835 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:56.835 "is_configured": false, 00:12:56.835 "data_offset": 0, 00:12:56.835 "data_size": 63488 00:12:56.835 }, 00:12:56.835 { 00:12:56.835 "name": "BaseBdev3", 00:12:56.835 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:56.835 "is_configured": true, 00:12:56.835 "data_offset": 2048, 00:12:56.835 "data_size": 63488 00:12:56.835 } 00:12:56.835 ] 00:12:56.835 }' 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.835 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.403 [2024-11-20 11:25:05.063665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.403 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.404 "name": "Existed_Raid", 00:12:57.404 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:57.404 "strip_size_kb": 64, 00:12:57.404 "state": "configuring", 00:12:57.404 "raid_level": "concat", 00:12:57.404 "superblock": true, 00:12:57.404 "num_base_bdevs": 3, 00:12:57.404 "num_base_bdevs_discovered": 1, 00:12:57.404 "num_base_bdevs_operational": 3, 00:12:57.404 "base_bdevs_list": [ 00:12:57.404 { 00:12:57.404 "name": "BaseBdev1", 00:12:57.404 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:57.404 "is_configured": true, 00:12:57.404 "data_offset": 2048, 00:12:57.404 "data_size": 63488 00:12:57.404 }, 00:12:57.404 { 00:12:57.404 "name": null, 00:12:57.404 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:57.404 "is_configured": false, 00:12:57.404 "data_offset": 0, 00:12:57.404 "data_size": 63488 00:12:57.404 }, 00:12:57.404 { 00:12:57.404 "name": null, 00:12:57.404 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:57.404 "is_configured": false, 00:12:57.404 "data_offset": 0, 00:12:57.404 "data_size": 63488 00:12:57.404 } 00:12:57.404 ] 00:12:57.404 }' 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.404 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.997 [2024-11-20 11:25:05.635897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.997 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.998 "name": "Existed_Raid", 00:12:57.998 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:57.998 "strip_size_kb": 64, 00:12:57.998 "state": "configuring", 00:12:57.998 "raid_level": "concat", 00:12:57.998 "superblock": true, 00:12:57.998 "num_base_bdevs": 3, 00:12:57.998 "num_base_bdevs_discovered": 2, 00:12:57.998 "num_base_bdevs_operational": 3, 00:12:57.998 "base_bdevs_list": [ 00:12:57.998 { 00:12:57.998 "name": "BaseBdev1", 00:12:57.998 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:57.998 "is_configured": true, 00:12:57.998 "data_offset": 2048, 00:12:57.998 "data_size": 63488 00:12:57.998 }, 00:12:57.998 { 00:12:57.998 "name": null, 00:12:57.998 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:57.998 "is_configured": false, 00:12:57.998 "data_offset": 0, 00:12:57.998 "data_size": 63488 00:12:57.998 }, 00:12:57.998 { 00:12:57.998 "name": "BaseBdev3", 00:12:57.998 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:57.998 "is_configured": true, 00:12:57.998 "data_offset": 2048, 00:12:57.998 "data_size": 63488 00:12:57.998 } 00:12:57.998 ] 00:12:57.998 }' 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.998 11:25:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.566 [2024-11-20 11:25:06.224081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.566 "name": "Existed_Raid", 00:12:58.566 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:58.566 "strip_size_kb": 64, 00:12:58.566 "state": "configuring", 00:12:58.566 "raid_level": "concat", 00:12:58.566 "superblock": true, 00:12:58.566 "num_base_bdevs": 3, 00:12:58.566 "num_base_bdevs_discovered": 1, 00:12:58.566 "num_base_bdevs_operational": 3, 00:12:58.566 "base_bdevs_list": [ 00:12:58.566 { 00:12:58.566 "name": null, 00:12:58.566 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:58.566 "is_configured": false, 00:12:58.566 "data_offset": 0, 00:12:58.566 "data_size": 63488 00:12:58.566 }, 00:12:58.566 { 00:12:58.566 "name": null, 00:12:58.566 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:58.566 "is_configured": false, 00:12:58.566 "data_offset": 0, 
00:12:58.566 "data_size": 63488 00:12:58.566 }, 00:12:58.566 { 00:12:58.566 "name": "BaseBdev3", 00:12:58.566 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:58.566 "is_configured": true, 00:12:58.566 "data_offset": 2048, 00:12:58.566 "data_size": 63488 00:12:58.566 } 00:12:58.566 ] 00:12:58.566 }' 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.566 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 [2024-11-20 11:25:06.888523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.134 11:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.134 "name": "Existed_Raid", 00:12:59.134 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:59.134 "strip_size_kb": 64, 00:12:59.134 "state": "configuring", 00:12:59.134 "raid_level": "concat", 00:12:59.134 "superblock": true, 00:12:59.134 "num_base_bdevs": 3, 00:12:59.134 
"num_base_bdevs_discovered": 2, 00:12:59.134 "num_base_bdevs_operational": 3, 00:12:59.134 "base_bdevs_list": [ 00:12:59.134 { 00:12:59.134 "name": null, 00:12:59.134 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:59.134 "is_configured": false, 00:12:59.134 "data_offset": 0, 00:12:59.134 "data_size": 63488 00:12:59.134 }, 00:12:59.134 { 00:12:59.134 "name": "BaseBdev2", 00:12:59.134 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:59.134 "is_configured": true, 00:12:59.134 "data_offset": 2048, 00:12:59.134 "data_size": 63488 00:12:59.134 }, 00:12:59.134 { 00:12:59.134 "name": "BaseBdev3", 00:12:59.134 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:59.134 "is_configured": true, 00:12:59.134 "data_offset": 2048, 00:12:59.134 "data_size": 63488 00:12:59.134 } 00:12:59.134 ] 00:12:59.134 }' 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.134 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 11:25:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3dcdb92e-9441-419a-aa5b-5c59aa4a1c44 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.702 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 [2024-11-20 11:25:07.543832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:59.702 [2024-11-20 11:25:07.544171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:59.702 [2024-11-20 11:25:07.544196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:59.702 NewBaseBdev 00:12:59.702 [2024-11-20 11:25:07.544541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:59.702 [2024-11-20 11:25:07.544763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:59.702 [2024-11-20 11:25:07.544780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:59.702 [2024-11-20 11:25:07.544972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.962 [ 00:12:59.962 { 00:12:59.962 "name": "NewBaseBdev", 00:12:59.962 "aliases": [ 00:12:59.962 "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44" 00:12:59.962 ], 00:12:59.962 "product_name": "Malloc disk", 00:12:59.962 "block_size": 512, 00:12:59.962 "num_blocks": 65536, 00:12:59.962 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:59.962 "assigned_rate_limits": { 00:12:59.962 "rw_ios_per_sec": 0, 00:12:59.962 "rw_mbytes_per_sec": 0, 00:12:59.962 "r_mbytes_per_sec": 0, 00:12:59.962 "w_mbytes_per_sec": 0 00:12:59.962 }, 00:12:59.962 "claimed": true, 00:12:59.962 "claim_type": "exclusive_write", 00:12:59.962 "zoned": false, 00:12:59.962 "supported_io_types": { 00:12:59.962 "read": true, 00:12:59.962 "write": true, 
00:12:59.962 "unmap": true, 00:12:59.962 "flush": true, 00:12:59.962 "reset": true, 00:12:59.962 "nvme_admin": false, 00:12:59.962 "nvme_io": false, 00:12:59.962 "nvme_io_md": false, 00:12:59.962 "write_zeroes": true, 00:12:59.962 "zcopy": true, 00:12:59.962 "get_zone_info": false, 00:12:59.962 "zone_management": false, 00:12:59.962 "zone_append": false, 00:12:59.962 "compare": false, 00:12:59.962 "compare_and_write": false, 00:12:59.962 "abort": true, 00:12:59.962 "seek_hole": false, 00:12:59.962 "seek_data": false, 00:12:59.962 "copy": true, 00:12:59.962 "nvme_iov_md": false 00:12:59.962 }, 00:12:59.962 "memory_domains": [ 00:12:59.962 { 00:12:59.962 "dma_device_id": "system", 00:12:59.962 "dma_device_type": 1 00:12:59.962 }, 00:12:59.962 { 00:12:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.962 "dma_device_type": 2 00:12:59.962 } 00:12:59.962 ], 00:12:59.962 "driver_specific": {} 00:12:59.962 } 00:12:59.962 ] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.962 "name": "Existed_Raid", 00:12:59.962 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:12:59.962 "strip_size_kb": 64, 00:12:59.962 "state": "online", 00:12:59.962 "raid_level": "concat", 00:12:59.962 "superblock": true, 00:12:59.962 "num_base_bdevs": 3, 00:12:59.962 "num_base_bdevs_discovered": 3, 00:12:59.962 "num_base_bdevs_operational": 3, 00:12:59.962 "base_bdevs_list": [ 00:12:59.962 { 00:12:59.962 "name": "NewBaseBdev", 00:12:59.962 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:12:59.962 "is_configured": true, 00:12:59.962 "data_offset": 2048, 00:12:59.962 "data_size": 63488 00:12:59.962 }, 00:12:59.962 { 00:12:59.962 "name": "BaseBdev2", 00:12:59.962 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:12:59.962 "is_configured": true, 00:12:59.962 "data_offset": 2048, 00:12:59.962 "data_size": 63488 00:12:59.962 }, 00:12:59.962 { 00:12:59.962 "name": "BaseBdev3", 00:12:59.962 "uuid": 
"e32b5785-6996-4002-9810-46685ae4f9b1", 00:12:59.962 "is_configured": true, 00:12:59.962 "data_offset": 2048, 00:12:59.962 "data_size": 63488 00:12:59.962 } 00:12:59.962 ] 00:12:59.962 }' 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.962 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:00.531 [2024-11-20 11:25:08.104415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:00.531 "name": "Existed_Raid", 00:13:00.531 "aliases": [ 00:13:00.531 "d277cb7b-6aea-4767-a90e-b3fa27652522" 
00:13:00.531 ], 00:13:00.531 "product_name": "Raid Volume", 00:13:00.531 "block_size": 512, 00:13:00.531 "num_blocks": 190464, 00:13:00.531 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:13:00.531 "assigned_rate_limits": { 00:13:00.531 "rw_ios_per_sec": 0, 00:13:00.531 "rw_mbytes_per_sec": 0, 00:13:00.531 "r_mbytes_per_sec": 0, 00:13:00.531 "w_mbytes_per_sec": 0 00:13:00.531 }, 00:13:00.531 "claimed": false, 00:13:00.531 "zoned": false, 00:13:00.531 "supported_io_types": { 00:13:00.531 "read": true, 00:13:00.531 "write": true, 00:13:00.531 "unmap": true, 00:13:00.531 "flush": true, 00:13:00.531 "reset": true, 00:13:00.531 "nvme_admin": false, 00:13:00.531 "nvme_io": false, 00:13:00.531 "nvme_io_md": false, 00:13:00.531 "write_zeroes": true, 00:13:00.531 "zcopy": false, 00:13:00.531 "get_zone_info": false, 00:13:00.531 "zone_management": false, 00:13:00.531 "zone_append": false, 00:13:00.531 "compare": false, 00:13:00.531 "compare_and_write": false, 00:13:00.531 "abort": false, 00:13:00.531 "seek_hole": false, 00:13:00.531 "seek_data": false, 00:13:00.531 "copy": false, 00:13:00.531 "nvme_iov_md": false 00:13:00.531 }, 00:13:00.531 "memory_domains": [ 00:13:00.531 { 00:13:00.531 "dma_device_id": "system", 00:13:00.531 "dma_device_type": 1 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.531 "dma_device_type": 2 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "dma_device_id": "system", 00:13:00.531 "dma_device_type": 1 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.531 "dma_device_type": 2 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "dma_device_id": "system", 00:13:00.531 "dma_device_type": 1 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.531 "dma_device_type": 2 00:13:00.531 } 00:13:00.531 ], 00:13:00.531 "driver_specific": { 00:13:00.531 "raid": { 00:13:00.531 "uuid": "d277cb7b-6aea-4767-a90e-b3fa27652522", 00:13:00.531 
"strip_size_kb": 64, 00:13:00.531 "state": "online", 00:13:00.531 "raid_level": "concat", 00:13:00.531 "superblock": true, 00:13:00.531 "num_base_bdevs": 3, 00:13:00.531 "num_base_bdevs_discovered": 3, 00:13:00.531 "num_base_bdevs_operational": 3, 00:13:00.531 "base_bdevs_list": [ 00:13:00.531 { 00:13:00.531 "name": "NewBaseBdev", 00:13:00.531 "uuid": "3dcdb92e-9441-419a-aa5b-5c59aa4a1c44", 00:13:00.531 "is_configured": true, 00:13:00.531 "data_offset": 2048, 00:13:00.531 "data_size": 63488 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "name": "BaseBdev2", 00:13:00.531 "uuid": "416b2381-8a81-4522-807b-537c84a4dd7c", 00:13:00.531 "is_configured": true, 00:13:00.531 "data_offset": 2048, 00:13:00.531 "data_size": 63488 00:13:00.531 }, 00:13:00.531 { 00:13:00.531 "name": "BaseBdev3", 00:13:00.531 "uuid": "e32b5785-6996-4002-9810-46685ae4f9b1", 00:13:00.531 "is_configured": true, 00:13:00.531 "data_offset": 2048, 00:13:00.531 "data_size": 63488 00:13:00.531 } 00:13:00.531 ] 00:13:00.531 } 00:13:00.531 } 00:13:00.531 }' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:00.531 BaseBdev2 00:13:00.531 BaseBdev3' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.531 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.790 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.790 [2024-11-20 11:25:08.424099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.790 [2024-11-20 11:25:08.424136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.791 [2024-11-20 11:25:08.424224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.791 [2024-11-20 11:25:08.424328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.791 [2024-11-20 11:25:08.424349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66194 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66194 ']' 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66194 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66194 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.791 killing process with pid 66194 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66194' 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66194 00:13:00.791 [2024-11-20 11:25:08.458384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.791 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66194 00:13:01.050 [2024-11-20 11:25:08.723687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.027 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:02.027 00:13:02.027 real 0m11.637s 00:13:02.027 user 0m19.254s 00:13:02.027 sys 0m1.625s 00:13:02.027 11:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.027 11:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.027 ************************************ 00:13:02.027 END TEST raid_state_function_test_sb 00:13:02.027 ************************************ 00:13:02.027 11:25:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:02.027 11:25:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:02.027 11:25:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.027 11:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.027 ************************************ 00:13:02.027 START TEST raid_superblock_test 00:13:02.027 ************************************ 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:02.027 11:25:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66820 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66820 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66820 ']' 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.027 11:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.286 [2024-11-20 11:25:09.930825] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:02.286 [2024-11-20 11:25:09.930995] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66820 ] 00:13:02.286 [2024-11-20 11:25:10.108380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.552 [2024-11-20 11:25:10.239392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.810 [2024-11-20 11:25:10.442844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.810 [2024-11-20 11:25:10.442909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:03.069 
11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.069 malloc1 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.069 [2024-11-20 11:25:10.888225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.069 [2024-11-20 11:25:10.888338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.069 [2024-11-20 11:25:10.888373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:03.069 [2024-11-20 11:25:10.888389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.069 [2024-11-20 11:25:10.891236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.069 [2024-11-20 11:25:10.891295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.069 pt1 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:03.069 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.070 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 malloc2 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 [2024-11-20 11:25:10.943952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:03.329 [2024-11-20 11:25:10.944060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.329 [2024-11-20 11:25:10.944111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:03.329 [2024-11-20 11:25:10.944125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.329 [2024-11-20 11:25:10.946973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.329 [2024-11-20 11:25:10.947046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:03.329 
pt2 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.329 11:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 malloc3 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 [2024-11-20 11:25:11.006822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:03.329 [2024-11-20 11:25:11.006884] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.329 [2024-11-20 11:25:11.006916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:03.329 [2024-11-20 11:25:11.006932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.329 [2024-11-20 11:25:11.009640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.329 [2024-11-20 11:25:11.009703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:03.329 pt3 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.329 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 [2024-11-20 11:25:11.014880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.329 [2024-11-20 11:25:11.017354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:03.329 [2024-11-20 11:25:11.017453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:03.329 [2024-11-20 11:25:11.017698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:03.329 [2024-11-20 11:25:11.017729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:03.329 [2024-11-20 11:25:11.018066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:13:03.330 [2024-11-20 11:25:11.018289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.330 [2024-11-20 11:25:11.018317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:03.330 [2024-11-20 11:25:11.018500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.330 11:25:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.330 "name": "raid_bdev1", 00:13:03.330 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:03.330 "strip_size_kb": 64, 00:13:03.330 "state": "online", 00:13:03.330 "raid_level": "concat", 00:13:03.330 "superblock": true, 00:13:03.330 "num_base_bdevs": 3, 00:13:03.330 "num_base_bdevs_discovered": 3, 00:13:03.330 "num_base_bdevs_operational": 3, 00:13:03.330 "base_bdevs_list": [ 00:13:03.330 { 00:13:03.330 "name": "pt1", 00:13:03.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.330 "is_configured": true, 00:13:03.330 "data_offset": 2048, 00:13:03.330 "data_size": 63488 00:13:03.330 }, 00:13:03.330 { 00:13:03.330 "name": "pt2", 00:13:03.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.330 "is_configured": true, 00:13:03.330 "data_offset": 2048, 00:13:03.330 "data_size": 63488 00:13:03.330 }, 00:13:03.330 { 00:13:03.330 "name": "pt3", 00:13:03.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.330 "is_configured": true, 00:13:03.330 "data_offset": 2048, 00:13:03.330 "data_size": 63488 00:13:03.330 } 00:13:03.330 ] 00:13:03.330 }' 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.330 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.897 [2024-11-20 11:25:11.523398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.897 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.897 "name": "raid_bdev1", 00:13:03.897 "aliases": [ 00:13:03.897 "ac76eb18-f89c-4398-a647-994d990c8ce2" 00:13:03.897 ], 00:13:03.897 "product_name": "Raid Volume", 00:13:03.897 "block_size": 512, 00:13:03.897 "num_blocks": 190464, 00:13:03.897 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:03.897 "assigned_rate_limits": { 00:13:03.897 "rw_ios_per_sec": 0, 00:13:03.897 "rw_mbytes_per_sec": 0, 00:13:03.897 "r_mbytes_per_sec": 0, 00:13:03.897 "w_mbytes_per_sec": 0 00:13:03.897 }, 00:13:03.897 "claimed": false, 00:13:03.897 "zoned": false, 00:13:03.897 "supported_io_types": { 00:13:03.897 "read": true, 00:13:03.897 "write": true, 00:13:03.897 "unmap": true, 00:13:03.897 "flush": true, 00:13:03.897 "reset": true, 00:13:03.897 "nvme_admin": false, 00:13:03.897 "nvme_io": false, 00:13:03.897 "nvme_io_md": false, 00:13:03.897 "write_zeroes": true, 00:13:03.897 "zcopy": false, 00:13:03.897 "get_zone_info": false, 00:13:03.897 "zone_management": false, 00:13:03.897 "zone_append": false, 00:13:03.897 "compare": 
false, 00:13:03.897 "compare_and_write": false, 00:13:03.897 "abort": false, 00:13:03.897 "seek_hole": false, 00:13:03.897 "seek_data": false, 00:13:03.897 "copy": false, 00:13:03.897 "nvme_iov_md": false 00:13:03.897 }, 00:13:03.897 "memory_domains": [ 00:13:03.897 { 00:13:03.897 "dma_device_id": "system", 00:13:03.897 "dma_device_type": 1 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.897 "dma_device_type": 2 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "dma_device_id": "system", 00:13:03.897 "dma_device_type": 1 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.897 "dma_device_type": 2 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "dma_device_id": "system", 00:13:03.897 "dma_device_type": 1 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.897 "dma_device_type": 2 00:13:03.897 } 00:13:03.897 ], 00:13:03.897 "driver_specific": { 00:13:03.897 "raid": { 00:13:03.897 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:03.897 "strip_size_kb": 64, 00:13:03.897 "state": "online", 00:13:03.897 "raid_level": "concat", 00:13:03.897 "superblock": true, 00:13:03.897 "num_base_bdevs": 3, 00:13:03.897 "num_base_bdevs_discovered": 3, 00:13:03.897 "num_base_bdevs_operational": 3, 00:13:03.897 "base_bdevs_list": [ 00:13:03.897 { 00:13:03.897 "name": "pt1", 00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.897 "is_configured": true, 00:13:03.897 "data_offset": 2048, 00:13:03.897 "data_size": 63488 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "name": "pt2", 00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.897 "is_configured": true, 00:13:03.897 "data_offset": 2048, 00:13:03.897 "data_size": 63488 00:13:03.897 }, 00:13:03.897 { 00:13:03.897 "name": "pt3", 00:13:03.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.897 "is_configured": true, 00:13:03.897 "data_offset": 2048, 00:13:03.897 
"data_size": 63488 00:13:03.897 } 00:13:03.898 ] 00:13:03.898 } 00:13:03.898 } 00:13:03.898 }' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:03.898 pt2 00:13:03.898 pt3' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.898 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:04.156 [2024-11-20 11:25:11.843420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ac76eb18-f89c-4398-a647-994d990c8ce2 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ac76eb18-f89c-4398-a647-994d990c8ce2 ']' 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.156 [2024-11-20 11:25:11.887067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.156 [2024-11-20 11:25:11.887116] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.156 [2024-11-20 11:25:11.887214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.156 [2024-11-20 11:25:11.887298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:04.156 [2024-11-20 11:25:11.887313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.156 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.157 11:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.415 [2024-11-20 11:25:12.011171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:04.415 [2024-11-20 11:25:12.013818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:04.415 
[2024-11-20 11:25:12.013896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:04.415 [2024-11-20 11:25:12.013965] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:04.415 [2024-11-20 11:25:12.014098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:04.415 [2024-11-20 11:25:12.014131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:04.415 [2024-11-20 11:25:12.014157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:04.415 [2024-11-20 11:25:12.014170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:04.415 request: 00:13:04.415 { 00:13:04.415 "name": "raid_bdev1", 00:13:04.415 "raid_level": "concat", 00:13:04.415 "base_bdevs": [ 00:13:04.415 "malloc1", 00:13:04.415 "malloc2", 00:13:04.415 "malloc3" 00:13:04.415 ], 00:13:04.415 "strip_size_kb": 64, 00:13:04.415 "superblock": false, 00:13:04.415 "method": "bdev_raid_create", 00:13:04.415 "req_id": 1 00:13:04.415 } 00:13:04.415 Got JSON-RPC error response 00:13:04.415 response: 00:13:04.415 { 00:13:04.415 "code": -17, 00:13:04.415 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:04.415 } 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.415 11:25:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.415 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.415 [2024-11-20 11:25:12.071157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.415 [2024-11-20 11:25:12.071225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.415 [2024-11-20 11:25:12.071253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:04.415 [2024-11-20 11:25:12.071266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.415 [2024-11-20 11:25:12.074262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.415 [2024-11-20 11:25:12.074306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.415 [2024-11-20 11:25:12.074440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:04.415 [2024-11-20 11:25:12.074535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:13:04.415 pt1 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.416 "name": "raid_bdev1", 00:13:04.416 "uuid": 
"ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:04.416 "strip_size_kb": 64, 00:13:04.416 "state": "configuring", 00:13:04.416 "raid_level": "concat", 00:13:04.416 "superblock": true, 00:13:04.416 "num_base_bdevs": 3, 00:13:04.416 "num_base_bdevs_discovered": 1, 00:13:04.416 "num_base_bdevs_operational": 3, 00:13:04.416 "base_bdevs_list": [ 00:13:04.416 { 00:13:04.416 "name": "pt1", 00:13:04.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.416 "is_configured": true, 00:13:04.416 "data_offset": 2048, 00:13:04.416 "data_size": 63488 00:13:04.416 }, 00:13:04.416 { 00:13:04.416 "name": null, 00:13:04.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.416 "is_configured": false, 00:13:04.416 "data_offset": 2048, 00:13:04.416 "data_size": 63488 00:13:04.416 }, 00:13:04.416 { 00:13:04.416 "name": null, 00:13:04.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.416 "is_configured": false, 00:13:04.416 "data_offset": 2048, 00:13:04.416 "data_size": 63488 00:13:04.416 } 00:13:04.416 ] 00:13:04.416 }' 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.416 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.983 [2024-11-20 11:25:12.555353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.983 [2024-11-20 11:25:12.555487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.983 [2024-11-20 11:25:12.555534] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:04.983 [2024-11-20 11:25:12.555549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.983 [2024-11-20 11:25:12.556145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.983 [2024-11-20 11:25:12.556181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.983 [2024-11-20 11:25:12.556290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:04.983 [2024-11-20 11:25:12.556320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.983 pt2 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.983 [2024-11-20 11:25:12.563341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.983 "name": "raid_bdev1", 00:13:04.983 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:04.983 "strip_size_kb": 64, 00:13:04.983 "state": "configuring", 00:13:04.983 "raid_level": "concat", 00:13:04.983 "superblock": true, 00:13:04.983 "num_base_bdevs": 3, 00:13:04.983 "num_base_bdevs_discovered": 1, 00:13:04.983 "num_base_bdevs_operational": 3, 00:13:04.983 "base_bdevs_list": [ 00:13:04.983 { 00:13:04.983 "name": "pt1", 00:13:04.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.983 "is_configured": true, 00:13:04.983 "data_offset": 2048, 00:13:04.983 "data_size": 63488 00:13:04.983 }, 00:13:04.983 { 00:13:04.983 "name": null, 00:13:04.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.983 "is_configured": false, 00:13:04.983 "data_offset": 0, 00:13:04.983 "data_size": 63488 00:13:04.983 }, 00:13:04.983 { 00:13:04.983 "name": null, 00:13:04.983 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:04.983 "is_configured": false, 00:13:04.983 "data_offset": 2048, 00:13:04.983 "data_size": 63488 00:13:04.983 } 00:13:04.983 ] 00:13:04.983 }' 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.983 11:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 [2024-11-20 11:25:13.075483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.242 [2024-11-20 11:25:13.075581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.242 [2024-11-20 11:25:13.075609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:05.242 [2024-11-20 11:25:13.075640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.242 [2024-11-20 11:25:13.076204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.242 [2024-11-20 11:25:13.076246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.242 [2024-11-20 11:25:13.076346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:05.242 [2024-11-20 11:25:13.076391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:05.242 pt2 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.242 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 [2024-11-20 11:25:13.083442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:05.242 [2024-11-20 11:25:13.083498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.242 [2024-11-20 11:25:13.083519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:05.242 [2024-11-20 11:25:13.083535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.242 [2024-11-20 11:25:13.083996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.242 [2024-11-20 11:25:13.084040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:05.242 [2024-11-20 11:25:13.084117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:05.242 [2024-11-20 11:25:13.084150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:05.242 [2024-11-20 11:25:13.084292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:05.242 [2024-11-20 11:25:13.084312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:05.242 [2024-11-20 11:25:13.084641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:05.242 [2024-11-20 
11:25:13.084823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:05.242 [2024-11-20 11:25:13.084838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:05.242 [2024-11-20 11:25:13.085000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.501 pt3 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.501 "name": "raid_bdev1", 00:13:05.501 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:05.501 "strip_size_kb": 64, 00:13:05.501 "state": "online", 00:13:05.501 "raid_level": "concat", 00:13:05.501 "superblock": true, 00:13:05.501 "num_base_bdevs": 3, 00:13:05.501 "num_base_bdevs_discovered": 3, 00:13:05.501 "num_base_bdevs_operational": 3, 00:13:05.501 "base_bdevs_list": [ 00:13:05.501 { 00:13:05.501 "name": "pt1", 00:13:05.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.501 "is_configured": true, 00:13:05.501 "data_offset": 2048, 00:13:05.501 "data_size": 63488 00:13:05.501 }, 00:13:05.501 { 00:13:05.501 "name": "pt2", 00:13:05.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.501 "is_configured": true, 00:13:05.501 "data_offset": 2048, 00:13:05.501 "data_size": 63488 00:13:05.501 }, 00:13:05.501 { 00:13:05.501 "name": "pt3", 00:13:05.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.501 "is_configured": true, 00:13:05.501 "data_offset": 2048, 00:13:05.501 "data_size": 63488 00:13:05.501 } 00:13:05.501 ] 00:13:05.501 }' 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.501 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.068 [2024-11-20 11:25:13.628051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:06.068 "name": "raid_bdev1", 00:13:06.068 "aliases": [ 00:13:06.068 "ac76eb18-f89c-4398-a647-994d990c8ce2" 00:13:06.068 ], 00:13:06.068 "product_name": "Raid Volume", 00:13:06.068 "block_size": 512, 00:13:06.068 "num_blocks": 190464, 00:13:06.068 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:06.068 "assigned_rate_limits": { 00:13:06.068 "rw_ios_per_sec": 0, 00:13:06.068 "rw_mbytes_per_sec": 0, 00:13:06.068 "r_mbytes_per_sec": 0, 00:13:06.068 "w_mbytes_per_sec": 0 00:13:06.068 }, 00:13:06.068 "claimed": false, 00:13:06.068 "zoned": false, 00:13:06.068 "supported_io_types": { 00:13:06.068 "read": true, 00:13:06.068 "write": true, 00:13:06.068 "unmap": true, 00:13:06.068 "flush": true, 00:13:06.068 "reset": true, 00:13:06.068 "nvme_admin": false, 00:13:06.068 "nvme_io": false, 00:13:06.068 "nvme_io_md": false, 
00:13:06.068 "write_zeroes": true, 00:13:06.068 "zcopy": false, 00:13:06.068 "get_zone_info": false, 00:13:06.068 "zone_management": false, 00:13:06.068 "zone_append": false, 00:13:06.068 "compare": false, 00:13:06.068 "compare_and_write": false, 00:13:06.068 "abort": false, 00:13:06.068 "seek_hole": false, 00:13:06.068 "seek_data": false, 00:13:06.068 "copy": false, 00:13:06.068 "nvme_iov_md": false 00:13:06.068 }, 00:13:06.068 "memory_domains": [ 00:13:06.068 { 00:13:06.068 "dma_device_id": "system", 00:13:06.068 "dma_device_type": 1 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.068 "dma_device_type": 2 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "dma_device_id": "system", 00:13:06.068 "dma_device_type": 1 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.068 "dma_device_type": 2 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "dma_device_id": "system", 00:13:06.068 "dma_device_type": 1 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.068 "dma_device_type": 2 00:13:06.068 } 00:13:06.068 ], 00:13:06.068 "driver_specific": { 00:13:06.068 "raid": { 00:13:06.068 "uuid": "ac76eb18-f89c-4398-a647-994d990c8ce2", 00:13:06.068 "strip_size_kb": 64, 00:13:06.068 "state": "online", 00:13:06.068 "raid_level": "concat", 00:13:06.068 "superblock": true, 00:13:06.068 "num_base_bdevs": 3, 00:13:06.068 "num_base_bdevs_discovered": 3, 00:13:06.068 "num_base_bdevs_operational": 3, 00:13:06.068 "base_bdevs_list": [ 00:13:06.068 { 00:13:06.068 "name": "pt1", 00:13:06.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.068 "is_configured": true, 00:13:06.068 "data_offset": 2048, 00:13:06.068 "data_size": 63488 00:13:06.068 }, 00:13:06.068 { 00:13:06.068 "name": "pt2", 00:13:06.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.068 "is_configured": true, 00:13:06.068 "data_offset": 2048, 00:13:06.068 "data_size": 63488 00:13:06.068 }, 
00:13:06.068 { 00:13:06.068 "name": "pt3", 00:13:06.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.068 "is_configured": true, 00:13:06.068 "data_offset": 2048, 00:13:06.068 "data_size": 63488 00:13:06.068 } 00:13:06.068 ] 00:13:06.068 } 00:13:06.068 } 00:13:06.068 }' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:06.068 pt2 00:13:06.068 pt3' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:06.068 11:25:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.068 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.327 
[2024-11-20 11:25:13.948091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ac76eb18-f89c-4398-a647-994d990c8ce2 '!=' ac76eb18-f89c-4398-a647-994d990c8ce2 ']' 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66820 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66820 ']' 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66820 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:06.327 11:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.327 11:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66820 00:13:06.327 killing process with pid 66820 00:13:06.328 11:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.328 11:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.328 11:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66820' 00:13:06.328 11:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66820 00:13:06.328 [2024-11-20 11:25:14.028365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.328 11:25:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66820 00:13:06.328 [2024-11-20 11:25:14.028482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.328 [2024-11-20 11:25:14.028561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.328 [2024-11-20 11:25:14.028580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:06.586 [2024-11-20 11:25:14.299655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.521 11:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:07.521 00:13:07.521 real 0m5.507s 00:13:07.521 user 0m8.267s 00:13:07.521 sys 0m0.808s 00:13:07.521 11:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.521 ************************************ 00:13:07.521 END TEST raid_superblock_test 00:13:07.521 ************************************ 00:13:07.521 11:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.779 11:25:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:07.779 11:25:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:07.779 11:25:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.779 11:25:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.779 ************************************ 00:13:07.779 START TEST raid_read_error_test 00:13:07.779 ************************************ 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:07.779 11:25:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:07.779 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4YJGCRKXL8 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67083 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67083 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67083 ']' 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.780 11:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.780 [2024-11-20 11:25:15.502221] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:07.780 [2024-11-20 11:25:15.502414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67083 ] 00:13:08.038 [2024-11-20 11:25:15.690714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.038 [2024-11-20 11:25:15.843432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.296 [2024-11-20 11:25:16.049834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.296 [2024-11-20 11:25:16.049913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 BaseBdev1_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 true 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 [2024-11-20 11:25:16.532486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:08.863 [2024-11-20 11:25:16.532552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.863 [2024-11-20 11:25:16.532583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:08.863 [2024-11-20 11:25:16.532603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.863 [2024-11-20 11:25:16.535601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.863 [2024-11-20 11:25:16.535682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.863 BaseBdev1 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 BaseBdev2_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 true 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 [2024-11-20 11:25:16.593245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:08.863 [2024-11-20 11:25:16.593311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.863 [2024-11-20 11:25:16.593338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:08.863 [2024-11-20 11:25:16.593357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.863 [2024-11-20 11:25:16.596175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.863 [2024-11-20 11:25:16.596236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.863 BaseBdev2 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 BaseBdev3_malloc 00:13:08.863 11:25:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.863 true 00:13:08.863 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.864 [2024-11-20 11:25:16.663645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:08.864 [2024-11-20 11:25:16.663707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.864 [2024-11-20 11:25:16.663742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:08.864 [2024-11-20 11:25:16.663760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.864 [2024-11-20 11:25:16.666531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.864 [2024-11-20 11:25:16.666578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.864 BaseBdev3 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.864 [2024-11-20 11:25:16.671740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.864 [2024-11-20 11:25:16.674217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.864 [2024-11-20 11:25:16.674334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:08.864 [2024-11-20 11:25:16.674658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:08.864 [2024-11-20 11:25:16.674688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:08.864 [2024-11-20 11:25:16.675009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:08.864 [2024-11-20 11:25:16.675229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:08.864 [2024-11-20 11:25:16.675263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:08.864 [2024-11-20 11:25:16.675453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.864 11:25:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.864 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.123 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.123 "name": "raid_bdev1", 00:13:09.123 "uuid": "55e3ad5e-3f6d-4a9d-82d0-3f7311306651", 00:13:09.123 "strip_size_kb": 64, 00:13:09.123 "state": "online", 00:13:09.123 "raid_level": "concat", 00:13:09.123 "superblock": true, 00:13:09.123 "num_base_bdevs": 3, 00:13:09.123 "num_base_bdevs_discovered": 3, 00:13:09.123 "num_base_bdevs_operational": 3, 00:13:09.123 "base_bdevs_list": [ 00:13:09.123 { 00:13:09.123 "name": "BaseBdev1", 00:13:09.123 "uuid": "7f709513-eacf-50ab-ba08-f1592198b474", 00:13:09.123 "is_configured": true, 00:13:09.123 "data_offset": 2048, 00:13:09.123 "data_size": 63488 00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "name": "BaseBdev2", 00:13:09.123 "uuid": "99dea9d9-c96b-5cee-b081-91ca50328bd9", 00:13:09.123 "is_configured": true, 00:13:09.123 "data_offset": 2048, 00:13:09.123 "data_size": 63488 
00:13:09.123 }, 00:13:09.123 { 00:13:09.123 "name": "BaseBdev3", 00:13:09.123 "uuid": "096783cd-f4aa-5afd-8727-0ff5dd79b14c", 00:13:09.123 "is_configured": true, 00:13:09.123 "data_offset": 2048, 00:13:09.123 "data_size": 63488 00:13:09.123 } 00:13:09.123 ] 00:13:09.123 }' 00:13:09.123 11:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.123 11:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.380 11:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:09.380 11:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.638 [2024-11-20 11:25:17.325282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.573 "name": "raid_bdev1", 00:13:10.573 "uuid": "55e3ad5e-3f6d-4a9d-82d0-3f7311306651", 00:13:10.573 "strip_size_kb": 64, 00:13:10.573 "state": "online", 00:13:10.573 "raid_level": "concat", 00:13:10.573 "superblock": true, 00:13:10.573 "num_base_bdevs": 3, 00:13:10.573 "num_base_bdevs_discovered": 3, 00:13:10.573 "num_base_bdevs_operational": 3, 00:13:10.573 "base_bdevs_list": [ 00:13:10.573 { 00:13:10.573 "name": "BaseBdev1", 00:13:10.573 "uuid": "7f709513-eacf-50ab-ba08-f1592198b474", 00:13:10.573 "is_configured": true, 00:13:10.573 "data_offset": 2048, 00:13:10.573 "data_size": 63488 
00:13:10.573 }, 00:13:10.573 { 00:13:10.573 "name": "BaseBdev2", 00:13:10.573 "uuid": "99dea9d9-c96b-5cee-b081-91ca50328bd9", 00:13:10.573 "is_configured": true, 00:13:10.573 "data_offset": 2048, 00:13:10.573 "data_size": 63488 00:13:10.573 }, 00:13:10.573 { 00:13:10.573 "name": "BaseBdev3", 00:13:10.573 "uuid": "096783cd-f4aa-5afd-8727-0ff5dd79b14c", 00:13:10.573 "is_configured": true, 00:13:10.573 "data_offset": 2048, 00:13:10.573 "data_size": 63488 00:13:10.573 } 00:13:10.573 ] 00:13:10.573 }' 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.573 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.139 [2024-11-20 11:25:18.756671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.139 [2024-11-20 11:25:18.756738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.139 [2024-11-20 11:25:18.761032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.139 [2024-11-20 11:25:18.761232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.139 [2024-11-20 11:25:18.761323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.139 [2024-11-20 11:25:18.761370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:11.139 { 00:13:11.139 "results": [ 00:13:11.139 { 00:13:11.139 "job": "raid_bdev1", 00:13:11.139 "core_mask": "0x1", 00:13:11.139 "workload": "randrw", 00:13:11.139 "percentage": 50, 
00:13:11.139 "status": "finished", 00:13:11.139 "queue_depth": 1, 00:13:11.139 "io_size": 131072, 00:13:11.139 "runtime": 1.429045, 00:13:11.139 "iops": 10359.365870214024, 00:13:11.139 "mibps": 1294.920733776753, 00:13:11.139 "io_failed": 1, 00:13:11.139 "io_timeout": 0, 00:13:11.139 "avg_latency_us": 134.78831893402108, 00:13:11.139 "min_latency_us": 39.79636363636364, 00:13:11.139 "max_latency_us": 1921.3963636363637 00:13:11.139 } 00:13:11.139 ], 00:13:11.139 "core_count": 1 00:13:11.139 } 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67083 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67083 ']' 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67083 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67083 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.139 killing process with pid 67083 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67083' 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67083 00:13:11.139 [2024-11-20 11:25:18.798295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.139 11:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67083 00:13:11.396 [2024-11-20 
11:25:19.001496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4YJGCRKXL8 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:12.330 00:13:12.330 real 0m4.703s 00:13:12.330 user 0m5.840s 00:13:12.330 sys 0m0.586s 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.330 11:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.330 ************************************ 00:13:12.330 END TEST raid_read_error_test 00:13:12.330 ************************************ 00:13:12.330 11:25:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:12.330 11:25:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:12.330 11:25:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.330 11:25:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.330 ************************************ 00:13:12.330 START TEST raid_write_error_test 00:13:12.330 ************************************ 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:13:12.330 11:25:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:12.330 11:25:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i6pnp2vNrD 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67224 00:13:12.330 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67224 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67224 ']' 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.331 11:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.588 [2024-11-20 11:25:20.265111] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:13:12.588 [2024-11-20 11:25:20.265295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67224 ] 00:13:12.846 [2024-11-20 11:25:20.447692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.846 [2024-11-20 11:25:20.574975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.103 [2024-11-20 11:25:20.779158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.103 [2024-11-20 11:25:20.779233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 BaseBdev1_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 true 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 [2024-11-20 11:25:21.318088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:13.671 [2024-11-20 11:25:21.318151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.671 [2024-11-20 11:25:21.318180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:13.671 [2024-11-20 11:25:21.318197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.671 [2024-11-20 11:25:21.320959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.671 [2024-11-20 11:25:21.321006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.671 BaseBdev1 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.671 BaseBdev2_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 true 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 [2024-11-20 11:25:21.377768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:13.671 [2024-11-20 11:25:21.377828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.671 [2024-11-20 11:25:21.377854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:13.671 [2024-11-20 11:25:21.377872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.671 [2024-11-20 11:25:21.380760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.671 [2024-11-20 11:25:21.380806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.671 BaseBdev2 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:13.671 11:25:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 BaseBdev3_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 true 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 [2024-11-20 11:25:21.463368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:13.671 [2024-11-20 11:25:21.463476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.671 [2024-11-20 11:25:21.463501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:13.671 [2024-11-20 11:25:21.463517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.671 [2024-11-20 11:25:21.466532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.671 [2024-11-20 11:25:21.466592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:13.671 BaseBdev3 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.671 [2024-11-20 11:25:21.471578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.671 [2024-11-20 11:25:21.474217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:13.671 [2024-11-20 11:25:21.474336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:13.671 [2024-11-20 11:25:21.474652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:13.671 [2024-11-20 11:25:21.474681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:13.671 [2024-11-20 11:25:21.475007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:13.671 [2024-11-20 11:25:21.475231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:13.671 [2024-11-20 11:25:21.475255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:13.671 [2024-11-20 11:25:21.475487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.671 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.672 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.930 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.930 "name": "raid_bdev1", 00:13:13.930 "uuid": "1a7c1211-5143-4863-8f12-2d0af20c06f9", 00:13:13.930 "strip_size_kb": 64, 00:13:13.930 "state": "online", 00:13:13.930 "raid_level": "concat", 00:13:13.930 "superblock": true, 00:13:13.930 "num_base_bdevs": 3, 00:13:13.930 "num_base_bdevs_discovered": 3, 00:13:13.930 "num_base_bdevs_operational": 3, 00:13:13.930 "base_bdevs_list": [ 00:13:13.930 { 00:13:13.930 
"name": "BaseBdev1", 00:13:13.930 "uuid": "e7f716d8-a871-530c-a5da-1243382a872c", 00:13:13.930 "is_configured": true, 00:13:13.930 "data_offset": 2048, 00:13:13.930 "data_size": 63488 00:13:13.930 }, 00:13:13.930 { 00:13:13.930 "name": "BaseBdev2", 00:13:13.930 "uuid": "a48f3b2d-33ce-5a98-86c8-6e8aa321e419", 00:13:13.930 "is_configured": true, 00:13:13.930 "data_offset": 2048, 00:13:13.930 "data_size": 63488 00:13:13.930 }, 00:13:13.930 { 00:13:13.930 "name": "BaseBdev3", 00:13:13.931 "uuid": "739347c6-d8e2-5f62-9704-c6e7f26f5324", 00:13:13.931 "is_configured": true, 00:13:13.931 "data_offset": 2048, 00:13:13.931 "data_size": 63488 00:13:13.931 } 00:13:13.931 ] 00:13:13.931 }' 00:13:13.931 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.931 11:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.189 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:14.189 11:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.448 [2024-11-20 11:25:22.097146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:15.385 11:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:15.385 11:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.385 11:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.385 "name": "raid_bdev1", 00:13:15.385 "uuid": "1a7c1211-5143-4863-8f12-2d0af20c06f9", 00:13:15.385 "strip_size_kb": 64, 00:13:15.385 "state": "online", 
00:13:15.385 "raid_level": "concat", 00:13:15.385 "superblock": true, 00:13:15.385 "num_base_bdevs": 3, 00:13:15.385 "num_base_bdevs_discovered": 3, 00:13:15.385 "num_base_bdevs_operational": 3, 00:13:15.385 "base_bdevs_list": [ 00:13:15.385 { 00:13:15.385 "name": "BaseBdev1", 00:13:15.385 "uuid": "e7f716d8-a871-530c-a5da-1243382a872c", 00:13:15.385 "is_configured": true, 00:13:15.385 "data_offset": 2048, 00:13:15.385 "data_size": 63488 00:13:15.385 }, 00:13:15.385 { 00:13:15.385 "name": "BaseBdev2", 00:13:15.385 "uuid": "a48f3b2d-33ce-5a98-86c8-6e8aa321e419", 00:13:15.385 "is_configured": true, 00:13:15.385 "data_offset": 2048, 00:13:15.385 "data_size": 63488 00:13:15.385 }, 00:13:15.385 { 00:13:15.385 "name": "BaseBdev3", 00:13:15.385 "uuid": "739347c6-d8e2-5f62-9704-c6e7f26f5324", 00:13:15.385 "is_configured": true, 00:13:15.385 "data_offset": 2048, 00:13:15.385 "data_size": 63488 00:13:15.385 } 00:13:15.385 ] 00:13:15.385 }' 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.385 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.974 [2024-11-20 11:25:23.520004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.974 [2024-11-20 11:25:23.520043] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.974 [2024-11-20 11:25:23.523347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.974 [2024-11-20 11:25:23.523413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.974 [2024-11-20 11:25:23.523468] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.974 [2024-11-20 11:25:23.523486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:15.974 { 00:13:15.974 "results": [ 00:13:15.974 { 00:13:15.974 "job": "raid_bdev1", 00:13:15.974 "core_mask": "0x1", 00:13:15.974 "workload": "randrw", 00:13:15.974 "percentage": 50, 00:13:15.974 "status": "finished", 00:13:15.974 "queue_depth": 1, 00:13:15.974 "io_size": 131072, 00:13:15.974 "runtime": 1.420249, 00:13:15.974 "iops": 10723.47172925311, 00:13:15.974 "mibps": 1340.4339661566387, 00:13:15.974 "io_failed": 1, 00:13:15.974 "io_timeout": 0, 00:13:15.974 "avg_latency_us": 130.1645576903564, 00:13:15.974 "min_latency_us": 38.167272727272724, 00:13:15.974 "max_latency_us": 1861.8181818181818 00:13:15.974 } 00:13:15.974 ], 00:13:15.974 "core_count": 1 00:13:15.974 } 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67224 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67224 ']' 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67224 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67224 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.974 killing process with pid 67224 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.974 11:25:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67224' 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67224 00:13:15.974 11:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67224 00:13:15.974 [2024-11-20 11:25:23.558447] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.974 [2024-11-20 11:25:23.766553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i6pnp2vNrD 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:17.351 00:13:17.351 real 0m4.728s 00:13:17.351 user 0m5.846s 00:13:17.351 sys 0m0.570s 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.351 11:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.351 ************************************ 00:13:17.351 END TEST raid_write_error_test 00:13:17.351 ************************************ 00:13:17.351 11:25:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:17.351 11:25:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:17.351 11:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:17.351 11:25:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.351 11:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.351 ************************************ 00:13:17.351 START TEST raid_state_function_test 00:13:17.351 ************************************ 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67368 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67368' 00:13:17.351 Process raid pid: 67368 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67368 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67368 ']' 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.351 11:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.351 [2024-11-20 11:25:25.033120] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:13:17.351 [2024-11-20 11:25:25.033298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.609 [2024-11-20 11:25:25.220207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.609 [2024-11-20 11:25:25.352467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.868 [2024-11-20 11:25:25.561714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.868 [2024-11-20 11:25:25.561755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.436 [2024-11-20 11:25:26.029922] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.436 [2024-11-20 11:25:26.029995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.436 [2024-11-20 11:25:26.030012] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.436 [2024-11-20 11:25:26.030030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.436 [2024-11-20 11:25:26.030040] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.436 [2024-11-20 11:25:26.030064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.436 
11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.436 "name": "Existed_Raid", 00:13:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.436 "strip_size_kb": 0, 00:13:18.436 "state": "configuring", 00:13:18.436 "raid_level": "raid1", 00:13:18.436 "superblock": false, 00:13:18.436 "num_base_bdevs": 3, 00:13:18.436 "num_base_bdevs_discovered": 0, 00:13:18.436 "num_base_bdevs_operational": 3, 00:13:18.436 "base_bdevs_list": [ 00:13:18.436 { 00:13:18.436 "name": "BaseBdev1", 00:13:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.436 "is_configured": false, 00:13:18.436 "data_offset": 0, 00:13:18.436 "data_size": 0 00:13:18.436 }, 00:13:18.436 { 00:13:18.436 "name": "BaseBdev2", 00:13:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.436 "is_configured": false, 00:13:18.436 "data_offset": 0, 00:13:18.436 "data_size": 0 00:13:18.436 }, 00:13:18.436 { 00:13:18.436 "name": "BaseBdev3", 00:13:18.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.436 "is_configured": false, 00:13:18.436 "data_offset": 0, 00:13:18.436 "data_size": 0 00:13:18.436 } 00:13:18.436 ] 00:13:18.436 }' 00:13:18.436 11:25:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.436 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 [2024-11-20 11:25:26.558011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.004 [2024-11-20 11:25:26.558064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 [2024-11-20 11:25:26.565967] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.004 [2024-11-20 11:25:26.566019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.004 [2024-11-20 11:25:26.566045] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.004 [2024-11-20 11:25:26.566062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.004 [2024-11-20 11:25:26.566072] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:19.004 [2024-11-20 11:25:26.566087] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 [2024-11-20 11:25:26.612300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.004 BaseBdev1 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 [ 00:13:19.004 { 00:13:19.004 "name": "BaseBdev1", 00:13:19.004 "aliases": [ 00:13:19.004 "60e2fd89-42ad-41c9-8cf2-20801448d9f0" 00:13:19.004 ], 00:13:19.004 "product_name": "Malloc disk", 00:13:19.004 "block_size": 512, 00:13:19.004 "num_blocks": 65536, 00:13:19.004 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:19.004 "assigned_rate_limits": { 00:13:19.004 "rw_ios_per_sec": 0, 00:13:19.004 "rw_mbytes_per_sec": 0, 00:13:19.004 "r_mbytes_per_sec": 0, 00:13:19.004 "w_mbytes_per_sec": 0 00:13:19.004 }, 00:13:19.004 "claimed": true, 00:13:19.004 "claim_type": "exclusive_write", 00:13:19.004 "zoned": false, 00:13:19.004 "supported_io_types": { 00:13:19.004 "read": true, 00:13:19.004 "write": true, 00:13:19.004 "unmap": true, 00:13:19.004 "flush": true, 00:13:19.004 "reset": true, 00:13:19.004 "nvme_admin": false, 00:13:19.004 "nvme_io": false, 00:13:19.004 "nvme_io_md": false, 00:13:19.004 "write_zeroes": true, 00:13:19.004 "zcopy": true, 00:13:19.004 "get_zone_info": false, 00:13:19.004 "zone_management": false, 00:13:19.004 "zone_append": false, 00:13:19.004 "compare": false, 00:13:19.004 "compare_and_write": false, 00:13:19.004 "abort": true, 00:13:19.004 "seek_hole": false, 00:13:19.004 "seek_data": false, 00:13:19.004 "copy": true, 00:13:19.004 "nvme_iov_md": false 00:13:19.004 }, 00:13:19.004 "memory_domains": [ 00:13:19.004 { 00:13:19.004 "dma_device_id": "system", 00:13:19.004 "dma_device_type": 1 00:13:19.004 }, 00:13:19.004 { 00:13:19.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.004 "dma_device_type": 2 00:13:19.004 } 00:13:19.004 ], 00:13:19.004 "driver_specific": {} 00:13:19.004 } 00:13:19.004 ] 00:13:19.004 11:25:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:19.004 "name": "Existed_Raid", 00:13:19.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.004 "strip_size_kb": 0, 00:13:19.004 "state": "configuring", 00:13:19.004 "raid_level": "raid1", 00:13:19.004 "superblock": false, 00:13:19.004 "num_base_bdevs": 3, 00:13:19.004 "num_base_bdevs_discovered": 1, 00:13:19.004 "num_base_bdevs_operational": 3, 00:13:19.004 "base_bdevs_list": [ 00:13:19.004 { 00:13:19.004 "name": "BaseBdev1", 00:13:19.004 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:19.004 "is_configured": true, 00:13:19.004 "data_offset": 0, 00:13:19.004 "data_size": 65536 00:13:19.004 }, 00:13:19.004 { 00:13:19.004 "name": "BaseBdev2", 00:13:19.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.004 "is_configured": false, 00:13:19.004 "data_offset": 0, 00:13:19.004 "data_size": 0 00:13:19.004 }, 00:13:19.004 { 00:13:19.004 "name": "BaseBdev3", 00:13:19.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.004 "is_configured": false, 00:13:19.004 "data_offset": 0, 00:13:19.004 "data_size": 0 00:13:19.004 } 00:13:19.004 ] 00:13:19.004 }' 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.004 11:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.572 [2024-11-20 11:25:27.160502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.572 [2024-11-20 11:25:27.160582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.572 [2024-11-20 11:25:27.168554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.572 [2024-11-20 11:25:27.171080] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.572 [2024-11-20 11:25:27.171144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.572 [2024-11-20 11:25:27.171162] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:19.572 [2024-11-20 11:25:27.171178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.572 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.573 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.573 "name": "Existed_Raid", 00:13:19.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.573 "strip_size_kb": 0, 00:13:19.573 "state": "configuring", 00:13:19.573 "raid_level": "raid1", 00:13:19.573 "superblock": false, 00:13:19.573 "num_base_bdevs": 3, 00:13:19.573 "num_base_bdevs_discovered": 1, 00:13:19.573 "num_base_bdevs_operational": 3, 00:13:19.573 "base_bdevs_list": [ 00:13:19.573 { 00:13:19.573 "name": "BaseBdev1", 00:13:19.573 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:19.573 "is_configured": true, 00:13:19.573 "data_offset": 0, 00:13:19.573 "data_size": 65536 00:13:19.573 }, 00:13:19.573 { 00:13:19.573 "name": "BaseBdev2", 00:13:19.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.573 
"is_configured": false, 00:13:19.573 "data_offset": 0, 00:13:19.573 "data_size": 0 00:13:19.573 }, 00:13:19.573 { 00:13:19.573 "name": "BaseBdev3", 00:13:19.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.573 "is_configured": false, 00:13:19.573 "data_offset": 0, 00:13:19.573 "data_size": 0 00:13:19.573 } 00:13:19.573 ] 00:13:19.573 }' 00:13:19.573 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.573 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.831 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.831 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.831 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 [2024-11-20 11:25:27.710767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.090 BaseBdev2 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:20.090 11:25:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 [ 00:13:20.090 { 00:13:20.090 "name": "BaseBdev2", 00:13:20.090 "aliases": [ 00:13:20.090 "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f" 00:13:20.090 ], 00:13:20.090 "product_name": "Malloc disk", 00:13:20.090 "block_size": 512, 00:13:20.090 "num_blocks": 65536, 00:13:20.090 "uuid": "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f", 00:13:20.090 "assigned_rate_limits": { 00:13:20.090 "rw_ios_per_sec": 0, 00:13:20.090 "rw_mbytes_per_sec": 0, 00:13:20.090 "r_mbytes_per_sec": 0, 00:13:20.090 "w_mbytes_per_sec": 0 00:13:20.090 }, 00:13:20.090 "claimed": true, 00:13:20.090 "claim_type": "exclusive_write", 00:13:20.090 "zoned": false, 00:13:20.090 "supported_io_types": { 00:13:20.090 "read": true, 00:13:20.090 "write": true, 00:13:20.090 "unmap": true, 00:13:20.090 "flush": true, 00:13:20.090 "reset": true, 00:13:20.090 "nvme_admin": false, 00:13:20.090 "nvme_io": false, 00:13:20.090 "nvme_io_md": false, 00:13:20.090 "write_zeroes": true, 00:13:20.090 "zcopy": true, 00:13:20.090 "get_zone_info": false, 00:13:20.090 "zone_management": false, 00:13:20.090 "zone_append": false, 00:13:20.090 "compare": false, 00:13:20.090 "compare_and_write": false, 00:13:20.090 "abort": true, 00:13:20.090 "seek_hole": false, 00:13:20.090 "seek_data": false, 00:13:20.090 "copy": true, 00:13:20.090 "nvme_iov_md": false 00:13:20.090 }, 00:13:20.090 
"memory_domains": [ 00:13:20.090 { 00:13:20.090 "dma_device_id": "system", 00:13:20.090 "dma_device_type": 1 00:13:20.090 }, 00:13:20.090 { 00:13:20.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.090 "dma_device_type": 2 00:13:20.090 } 00:13:20.090 ], 00:13:20.090 "driver_specific": {} 00:13:20.090 } 00:13:20.090 ] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.090 "name": "Existed_Raid", 00:13:20.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.090 "strip_size_kb": 0, 00:13:20.090 "state": "configuring", 00:13:20.090 "raid_level": "raid1", 00:13:20.090 "superblock": false, 00:13:20.090 "num_base_bdevs": 3, 00:13:20.090 "num_base_bdevs_discovered": 2, 00:13:20.090 "num_base_bdevs_operational": 3, 00:13:20.090 "base_bdevs_list": [ 00:13:20.090 { 00:13:20.090 "name": "BaseBdev1", 00:13:20.090 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:20.090 "is_configured": true, 00:13:20.090 "data_offset": 0, 00:13:20.090 "data_size": 65536 00:13:20.090 }, 00:13:20.090 { 00:13:20.090 "name": "BaseBdev2", 00:13:20.090 "uuid": "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f", 00:13:20.090 "is_configured": true, 00:13:20.090 "data_offset": 0, 00:13:20.090 "data_size": 65536 00:13:20.090 }, 00:13:20.090 { 00:13:20.090 "name": "BaseBdev3", 00:13:20.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.090 "is_configured": false, 00:13:20.090 "data_offset": 0, 00:13:20.090 "data_size": 0 00:13:20.090 } 00:13:20.090 ] 00:13:20.090 }' 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.090 11:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:13:20.664 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 [2024-11-20 11:25:28.313606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.664 [2024-11-20 11:25:28.313811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:20.664 [2024-11-20 11:25:28.313842] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:20.664 [2024-11-20 11:25:28.314390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:20.664 [2024-11-20 11:25:28.314757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:20.664 [2024-11-20 11:25:28.314791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:20.664 [2024-11-20 11:25:28.315261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.664 BaseBdev3 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 [ 00:13:20.665 { 00:13:20.665 "name": "BaseBdev3", 00:13:20.665 "aliases": [ 00:13:20.665 "2e1f54c0-9c79-42cf-8c02-b323260eaf60" 00:13:20.665 ], 00:13:20.665 "product_name": "Malloc disk", 00:13:20.665 "block_size": 512, 00:13:20.665 "num_blocks": 65536, 00:13:20.665 "uuid": "2e1f54c0-9c79-42cf-8c02-b323260eaf60", 00:13:20.665 "assigned_rate_limits": { 00:13:20.665 "rw_ios_per_sec": 0, 00:13:20.665 "rw_mbytes_per_sec": 0, 00:13:20.665 "r_mbytes_per_sec": 0, 00:13:20.665 "w_mbytes_per_sec": 0 00:13:20.665 }, 00:13:20.665 "claimed": true, 00:13:20.665 "claim_type": "exclusive_write", 00:13:20.665 "zoned": false, 00:13:20.665 "supported_io_types": { 00:13:20.665 "read": true, 00:13:20.665 "write": true, 00:13:20.665 "unmap": true, 00:13:20.665 "flush": true, 00:13:20.665 "reset": true, 00:13:20.665 "nvme_admin": false, 00:13:20.665 "nvme_io": false, 00:13:20.665 "nvme_io_md": false, 00:13:20.665 "write_zeroes": true, 00:13:20.665 "zcopy": true, 00:13:20.665 "get_zone_info": false, 00:13:20.665 "zone_management": false, 00:13:20.665 "zone_append": false, 00:13:20.665 "compare": false, 00:13:20.665 "compare_and_write": false, 00:13:20.665 "abort": true, 00:13:20.665 "seek_hole": false, 00:13:20.665 "seek_data": false, 00:13:20.665 
"copy": true, 00:13:20.665 "nvme_iov_md": false 00:13:20.665 }, 00:13:20.665 "memory_domains": [ 00:13:20.665 { 00:13:20.665 "dma_device_id": "system", 00:13:20.665 "dma_device_type": 1 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.665 "dma_device_type": 2 00:13:20.665 } 00:13:20.665 ], 00:13:20.665 "driver_specific": {} 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.665 11:25:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.665 "name": "Existed_Raid", 00:13:20.665 "uuid": "4a4853f6-4d80-48a7-8023-2cd1826042ca", 00:13:20.665 "strip_size_kb": 0, 00:13:20.665 "state": "online", 00:13:20.665 "raid_level": "raid1", 00:13:20.665 "superblock": false, 00:13:20.665 "num_base_bdevs": 3, 00:13:20.665 "num_base_bdevs_discovered": 3, 00:13:20.665 "num_base_bdevs_operational": 3, 00:13:20.665 "base_bdevs_list": [ 00:13:20.665 { 00:13:20.665 "name": "BaseBdev1", 00:13:20.665 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:20.665 "is_configured": true, 00:13:20.665 "data_offset": 0, 00:13:20.665 "data_size": 65536 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "name": "BaseBdev2", 00:13:20.665 "uuid": "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f", 00:13:20.665 "is_configured": true, 00:13:20.665 "data_offset": 0, 00:13:20.665 "data_size": 65536 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "name": "BaseBdev3", 00:13:20.665 "uuid": "2e1f54c0-9c79-42cf-8c02-b323260eaf60", 00:13:20.665 "is_configured": true, 00:13:20.665 "data_offset": 0, 00:13:20.665 "data_size": 65536 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 }' 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.665 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.232 11:25:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.232 [2024-11-20 11:25:28.838399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.232 "name": "Existed_Raid", 00:13:21.232 "aliases": [ 00:13:21.232 "4a4853f6-4d80-48a7-8023-2cd1826042ca" 00:13:21.232 ], 00:13:21.232 "product_name": "Raid Volume", 00:13:21.232 "block_size": 512, 00:13:21.232 "num_blocks": 65536, 00:13:21.232 "uuid": "4a4853f6-4d80-48a7-8023-2cd1826042ca", 00:13:21.232 "assigned_rate_limits": { 00:13:21.232 "rw_ios_per_sec": 0, 00:13:21.232 "rw_mbytes_per_sec": 0, 00:13:21.232 "r_mbytes_per_sec": 0, 00:13:21.232 "w_mbytes_per_sec": 0 00:13:21.232 }, 00:13:21.232 "claimed": false, 00:13:21.232 "zoned": false, 
00:13:21.232 "supported_io_types": { 00:13:21.232 "read": true, 00:13:21.232 "write": true, 00:13:21.232 "unmap": false, 00:13:21.232 "flush": false, 00:13:21.232 "reset": true, 00:13:21.232 "nvme_admin": false, 00:13:21.232 "nvme_io": false, 00:13:21.232 "nvme_io_md": false, 00:13:21.232 "write_zeroes": true, 00:13:21.232 "zcopy": false, 00:13:21.232 "get_zone_info": false, 00:13:21.232 "zone_management": false, 00:13:21.232 "zone_append": false, 00:13:21.232 "compare": false, 00:13:21.232 "compare_and_write": false, 00:13:21.232 "abort": false, 00:13:21.232 "seek_hole": false, 00:13:21.232 "seek_data": false, 00:13:21.232 "copy": false, 00:13:21.232 "nvme_iov_md": false 00:13:21.232 }, 00:13:21.232 "memory_domains": [ 00:13:21.232 { 00:13:21.232 "dma_device_id": "system", 00:13:21.232 "dma_device_type": 1 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.232 "dma_device_type": 2 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "dma_device_id": "system", 00:13:21.232 "dma_device_type": 1 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.232 "dma_device_type": 2 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "dma_device_id": "system", 00:13:21.232 "dma_device_type": 1 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.232 "dma_device_type": 2 00:13:21.232 } 00:13:21.232 ], 00:13:21.232 "driver_specific": { 00:13:21.232 "raid": { 00:13:21.232 "uuid": "4a4853f6-4d80-48a7-8023-2cd1826042ca", 00:13:21.232 "strip_size_kb": 0, 00:13:21.232 "state": "online", 00:13:21.232 "raid_level": "raid1", 00:13:21.232 "superblock": false, 00:13:21.232 "num_base_bdevs": 3, 00:13:21.232 "num_base_bdevs_discovered": 3, 00:13:21.232 "num_base_bdevs_operational": 3, 00:13:21.232 "base_bdevs_list": [ 00:13:21.232 { 00:13:21.232 "name": "BaseBdev1", 00:13:21.232 "uuid": "60e2fd89-42ad-41c9-8cf2-20801448d9f0", 00:13:21.232 "is_configured": true, 00:13:21.232 
"data_offset": 0, 00:13:21.232 "data_size": 65536 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "name": "BaseBdev2", 00:13:21.232 "uuid": "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 0, 00:13:21.232 "data_size": 65536 00:13:21.232 }, 00:13:21.232 { 00:13:21.232 "name": "BaseBdev3", 00:13:21.232 "uuid": "2e1f54c0-9c79-42cf-8c02-b323260eaf60", 00:13:21.232 "is_configured": true, 00:13:21.232 "data_offset": 0, 00:13:21.232 "data_size": 65536 00:13:21.232 } 00:13:21.232 ] 00:13:21.232 } 00:13:21.232 } 00:13:21.232 }' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:21.232 BaseBdev2 00:13:21.232 BaseBdev3' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.232 11:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.232 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:21.232 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.232 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.232 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.233 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.233 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.233 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.233 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.491 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.492 [2024-11-20 11:25:29.138149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.492 "name": "Existed_Raid", 00:13:21.492 "uuid": "4a4853f6-4d80-48a7-8023-2cd1826042ca", 00:13:21.492 "strip_size_kb": 0, 00:13:21.492 "state": "online", 00:13:21.492 "raid_level": "raid1", 00:13:21.492 "superblock": false, 00:13:21.492 "num_base_bdevs": 3, 00:13:21.492 "num_base_bdevs_discovered": 2, 00:13:21.492 "num_base_bdevs_operational": 2, 00:13:21.492 "base_bdevs_list": [ 00:13:21.492 { 00:13:21.492 "name": null, 00:13:21.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.492 "is_configured": false, 00:13:21.492 "data_offset": 0, 00:13:21.492 "data_size": 65536 00:13:21.492 }, 00:13:21.492 { 00:13:21.492 "name": "BaseBdev2", 00:13:21.492 "uuid": "b0bb6dd7-93fc-49ef-9c8b-091689a79c2f", 00:13:21.492 "is_configured": true, 00:13:21.492 "data_offset": 0, 00:13:21.492 "data_size": 65536 00:13:21.492 }, 00:13:21.492 { 00:13:21.492 "name": "BaseBdev3", 00:13:21.492 "uuid": "2e1f54c0-9c79-42cf-8c02-b323260eaf60", 00:13:21.492 "is_configured": true, 00:13:21.492 "data_offset": 0, 00:13:21.492 "data_size": 65536 00:13:21.492 } 00:13:21.492 ] 
00:13:21.492 }' 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.492 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.059 [2024-11-20 11:25:29.798460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.059 11:25:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.059 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.318 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.318 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:22.318 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:22.318 11:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:22.318 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.319 11:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.319 [2024-11-20 11:25:29.951788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:22.319 [2024-11-20 11:25:29.952001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.319 [2024-11-20 11:25:30.048819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.319 [2024-11-20 11:25:30.048929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.319 [2024-11-20 11:25:30.048956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:22.319 11:25:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.319 BaseBdev2 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.319 
11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.319 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.579 [ 00:13:22.579 { 00:13:22.579 "name": "BaseBdev2", 00:13:22.579 "aliases": [ 00:13:22.579 "f4f40481-c400-4e03-9029-c457cf7417da" 00:13:22.579 ], 00:13:22.579 "product_name": "Malloc disk", 00:13:22.579 "block_size": 512, 00:13:22.579 "num_blocks": 65536, 00:13:22.579 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:22.579 "assigned_rate_limits": { 00:13:22.579 "rw_ios_per_sec": 0, 00:13:22.579 "rw_mbytes_per_sec": 0, 00:13:22.579 "r_mbytes_per_sec": 0, 00:13:22.579 "w_mbytes_per_sec": 0 00:13:22.579 }, 00:13:22.579 "claimed": false, 00:13:22.579 "zoned": false, 00:13:22.579 "supported_io_types": { 00:13:22.579 "read": true, 00:13:22.579 "write": true, 00:13:22.579 "unmap": true, 00:13:22.579 "flush": true, 00:13:22.579 "reset": true, 00:13:22.579 "nvme_admin": false, 00:13:22.579 "nvme_io": false, 00:13:22.579 "nvme_io_md": false, 00:13:22.579 "write_zeroes": true, 
00:13:22.579 "zcopy": true, 00:13:22.579 "get_zone_info": false, 00:13:22.579 "zone_management": false, 00:13:22.579 "zone_append": false, 00:13:22.579 "compare": false, 00:13:22.579 "compare_and_write": false, 00:13:22.579 "abort": true, 00:13:22.579 "seek_hole": false, 00:13:22.579 "seek_data": false, 00:13:22.579 "copy": true, 00:13:22.579 "nvme_iov_md": false 00:13:22.579 }, 00:13:22.579 "memory_domains": [ 00:13:22.579 { 00:13:22.579 "dma_device_id": "system", 00:13:22.579 "dma_device_type": 1 00:13:22.579 }, 00:13:22.579 { 00:13:22.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.579 "dma_device_type": 2 00:13:22.579 } 00:13:22.579 ], 00:13:22.579 "driver_specific": {} 00:13:22.579 } 00:13:22.579 ] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.579 BaseBdev3 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.579 11:25:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.579 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.579 [ 00:13:22.579 { 00:13:22.579 "name": "BaseBdev3", 00:13:22.579 "aliases": [ 00:13:22.579 "1c64e638-1691-4fb6-b372-3e612dc656a3" 00:13:22.579 ], 00:13:22.579 "product_name": "Malloc disk", 00:13:22.579 "block_size": 512, 00:13:22.579 "num_blocks": 65536, 00:13:22.579 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:22.579 "assigned_rate_limits": { 00:13:22.579 "rw_ios_per_sec": 0, 00:13:22.579 "rw_mbytes_per_sec": 0, 00:13:22.579 "r_mbytes_per_sec": 0, 00:13:22.579 "w_mbytes_per_sec": 0 00:13:22.579 }, 00:13:22.579 "claimed": false, 00:13:22.579 "zoned": false, 00:13:22.579 "supported_io_types": { 00:13:22.579 "read": true, 00:13:22.579 "write": true, 00:13:22.579 "unmap": true, 00:13:22.579 "flush": true, 00:13:22.579 "reset": true, 00:13:22.579 "nvme_admin": false, 00:13:22.579 "nvme_io": false, 00:13:22.579 "nvme_io_md": false, 00:13:22.579 "write_zeroes": true, 
00:13:22.579 "zcopy": true, 00:13:22.579 "get_zone_info": false, 00:13:22.579 "zone_management": false, 00:13:22.580 "zone_append": false, 00:13:22.580 "compare": false, 00:13:22.580 "compare_and_write": false, 00:13:22.580 "abort": true, 00:13:22.580 "seek_hole": false, 00:13:22.580 "seek_data": false, 00:13:22.580 "copy": true, 00:13:22.580 "nvme_iov_md": false 00:13:22.580 }, 00:13:22.580 "memory_domains": [ 00:13:22.580 { 00:13:22.580 "dma_device_id": "system", 00:13:22.580 "dma_device_type": 1 00:13:22.580 }, 00:13:22.580 { 00:13:22.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.580 "dma_device_type": 2 00:13:22.580 } 00:13:22.580 ], 00:13:22.580 "driver_specific": {} 00:13:22.580 } 00:13:22.580 ] 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.580 [2024-11-20 11:25:30.259569] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:22.580 [2024-11-20 11:25:30.259688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:22.580 [2024-11-20 11:25:30.259727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.580 [2024-11-20 11:25:30.262516] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:22.580 "name": "Existed_Raid", 00:13:22.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.580 "strip_size_kb": 0, 00:13:22.580 "state": "configuring", 00:13:22.580 "raid_level": "raid1", 00:13:22.580 "superblock": false, 00:13:22.580 "num_base_bdevs": 3, 00:13:22.580 "num_base_bdevs_discovered": 2, 00:13:22.580 "num_base_bdevs_operational": 3, 00:13:22.580 "base_bdevs_list": [ 00:13:22.580 { 00:13:22.580 "name": "BaseBdev1", 00:13:22.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.580 "is_configured": false, 00:13:22.580 "data_offset": 0, 00:13:22.580 "data_size": 0 00:13:22.580 }, 00:13:22.580 { 00:13:22.580 "name": "BaseBdev2", 00:13:22.580 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:22.580 "is_configured": true, 00:13:22.580 "data_offset": 0, 00:13:22.580 "data_size": 65536 00:13:22.580 }, 00:13:22.580 { 00:13:22.580 "name": "BaseBdev3", 00:13:22.580 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:22.580 "is_configured": true, 00:13:22.580 "data_offset": 0, 00:13:22.580 "data_size": 65536 00:13:22.580 } 00:13:22.580 ] 00:13:22.580 }' 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.580 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.148 [2024-11-20 11:25:30.784727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.148 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.149 "name": "Existed_Raid", 00:13:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.149 "strip_size_kb": 0, 00:13:23.149 "state": "configuring", 00:13:23.149 "raid_level": "raid1", 00:13:23.149 "superblock": false, 00:13:23.149 "num_base_bdevs": 3, 
00:13:23.149 "num_base_bdevs_discovered": 1, 00:13:23.149 "num_base_bdevs_operational": 3, 00:13:23.149 "base_bdevs_list": [ 00:13:23.149 { 00:13:23.149 "name": "BaseBdev1", 00:13:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.149 "is_configured": false, 00:13:23.149 "data_offset": 0, 00:13:23.149 "data_size": 0 00:13:23.149 }, 00:13:23.149 { 00:13:23.149 "name": null, 00:13:23.149 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:23.149 "is_configured": false, 00:13:23.149 "data_offset": 0, 00:13:23.149 "data_size": 65536 00:13:23.149 }, 00:13:23.149 { 00:13:23.149 "name": "BaseBdev3", 00:13:23.149 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:23.149 "is_configured": true, 00:13:23.149 "data_offset": 0, 00:13:23.149 "data_size": 65536 00:13:23.149 } 00:13:23.149 ] 00:13:23.149 }' 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.149 11:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.850 11:25:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 [2024-11-20 11:25:31.375979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.850 BaseBdev1 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.850 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.850 [ 00:13:23.850 { 00:13:23.850 "name": "BaseBdev1", 00:13:23.850 "aliases": [ 00:13:23.850 "eb5af8b0-5f59-4e7a-8067-3a71f5890029" 00:13:23.850 ], 00:13:23.850 "product_name": "Malloc disk", 
00:13:23.850 "block_size": 512, 00:13:23.850 "num_blocks": 65536, 00:13:23.850 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:23.850 "assigned_rate_limits": { 00:13:23.850 "rw_ios_per_sec": 0, 00:13:23.850 "rw_mbytes_per_sec": 0, 00:13:23.850 "r_mbytes_per_sec": 0, 00:13:23.850 "w_mbytes_per_sec": 0 00:13:23.850 }, 00:13:23.850 "claimed": true, 00:13:23.850 "claim_type": "exclusive_write", 00:13:23.850 "zoned": false, 00:13:23.850 "supported_io_types": { 00:13:23.850 "read": true, 00:13:23.850 "write": true, 00:13:23.850 "unmap": true, 00:13:23.850 "flush": true, 00:13:23.850 "reset": true, 00:13:23.850 "nvme_admin": false, 00:13:23.850 "nvme_io": false, 00:13:23.850 "nvme_io_md": false, 00:13:23.850 "write_zeroes": true, 00:13:23.850 "zcopy": true, 00:13:23.850 "get_zone_info": false, 00:13:23.850 "zone_management": false, 00:13:23.851 "zone_append": false, 00:13:23.851 "compare": false, 00:13:23.851 "compare_and_write": false, 00:13:23.851 "abort": true, 00:13:23.851 "seek_hole": false, 00:13:23.851 "seek_data": false, 00:13:23.851 "copy": true, 00:13:23.851 "nvme_iov_md": false 00:13:23.851 }, 00:13:23.851 "memory_domains": [ 00:13:23.851 { 00:13:23.851 "dma_device_id": "system", 00:13:23.851 "dma_device_type": 1 00:13:23.851 }, 00:13:23.851 { 00:13:23.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.851 "dma_device_type": 2 00:13:23.851 } 00:13:23.851 ], 00:13:23.851 "driver_specific": {} 00:13:23.851 } 00:13:23.851 ] 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.851 "name": "Existed_Raid", 00:13:23.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.851 "strip_size_kb": 0, 00:13:23.851 "state": "configuring", 00:13:23.851 "raid_level": "raid1", 00:13:23.851 "superblock": false, 00:13:23.851 "num_base_bdevs": 3, 00:13:23.851 "num_base_bdevs_discovered": 2, 00:13:23.851 "num_base_bdevs_operational": 3, 00:13:23.851 "base_bdevs_list": [ 00:13:23.851 { 00:13:23.851 "name": "BaseBdev1", 00:13:23.851 "uuid": 
"eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:23.851 "is_configured": true, 00:13:23.851 "data_offset": 0, 00:13:23.851 "data_size": 65536 00:13:23.851 }, 00:13:23.851 { 00:13:23.851 "name": null, 00:13:23.851 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:23.851 "is_configured": false, 00:13:23.851 "data_offset": 0, 00:13:23.851 "data_size": 65536 00:13:23.851 }, 00:13:23.851 { 00:13:23.851 "name": "BaseBdev3", 00:13:23.851 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:23.851 "is_configured": true, 00:13:23.851 "data_offset": 0, 00:13:23.851 "data_size": 65536 00:13:23.851 } 00:13:23.851 ] 00:13:23.851 }' 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.851 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.434 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.434 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.434 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.434 11:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.434 11:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.434 [2024-11-20 11:25:32.020495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.434 11:25:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.434 "name": "Existed_Raid", 00:13:24.434 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:24.434 "strip_size_kb": 0, 00:13:24.434 "state": "configuring", 00:13:24.434 "raid_level": "raid1", 00:13:24.434 "superblock": false, 00:13:24.434 "num_base_bdevs": 3, 00:13:24.434 "num_base_bdevs_discovered": 1, 00:13:24.434 "num_base_bdevs_operational": 3, 00:13:24.434 "base_bdevs_list": [ 00:13:24.434 { 00:13:24.434 "name": "BaseBdev1", 00:13:24.434 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:24.434 "is_configured": true, 00:13:24.434 "data_offset": 0, 00:13:24.434 "data_size": 65536 00:13:24.434 }, 00:13:24.434 { 00:13:24.434 "name": null, 00:13:24.434 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:24.434 "is_configured": false, 00:13:24.434 "data_offset": 0, 00:13:24.434 "data_size": 65536 00:13:24.434 }, 00:13:24.434 { 00:13:24.434 "name": null, 00:13:24.434 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:24.434 "is_configured": false, 00:13:24.434 "data_offset": 0, 00:13:24.434 "data_size": 65536 00:13:24.434 } 00:13:24.434 ] 00:13:24.434 }' 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.434 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.693 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.693 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:24.693 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.693 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.957 [2024-11-20 11:25:32.596731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.957 "name": "Existed_Raid", 00:13:24.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.957 "strip_size_kb": 0, 00:13:24.957 "state": "configuring", 00:13:24.957 "raid_level": "raid1", 00:13:24.957 "superblock": false, 00:13:24.957 "num_base_bdevs": 3, 00:13:24.957 "num_base_bdevs_discovered": 2, 00:13:24.957 "num_base_bdevs_operational": 3, 00:13:24.957 "base_bdevs_list": [ 00:13:24.957 { 00:13:24.957 "name": "BaseBdev1", 00:13:24.957 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:24.957 "is_configured": true, 00:13:24.957 "data_offset": 0, 00:13:24.957 "data_size": 65536 00:13:24.957 }, 00:13:24.957 { 00:13:24.957 "name": null, 00:13:24.957 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:24.957 "is_configured": false, 00:13:24.957 "data_offset": 0, 00:13:24.957 "data_size": 65536 00:13:24.957 }, 00:13:24.957 { 00:13:24.957 "name": "BaseBdev3", 00:13:24.957 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:24.957 "is_configured": true, 00:13:24.957 "data_offset": 0, 00:13:24.957 "data_size": 65536 00:13:24.957 } 00:13:24.957 ] 00:13:24.957 }' 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.957 11:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.527 [2024-11-20 11:25:33.172897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.527 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.528 11:25:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.528 "name": "Existed_Raid", 00:13:25.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.528 "strip_size_kb": 0, 00:13:25.528 "state": "configuring", 00:13:25.528 "raid_level": "raid1", 00:13:25.528 "superblock": false, 00:13:25.528 "num_base_bdevs": 3, 00:13:25.528 "num_base_bdevs_discovered": 1, 00:13:25.528 "num_base_bdevs_operational": 3, 00:13:25.528 "base_bdevs_list": [ 00:13:25.528 { 00:13:25.528 "name": null, 00:13:25.528 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:25.528 "is_configured": false, 00:13:25.528 "data_offset": 0, 00:13:25.528 "data_size": 65536 00:13:25.528 }, 00:13:25.528 { 00:13:25.528 "name": null, 00:13:25.528 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:25.528 "is_configured": false, 00:13:25.528 "data_offset": 0, 00:13:25.528 "data_size": 65536 00:13:25.528 }, 00:13:25.528 { 00:13:25.528 "name": "BaseBdev3", 00:13:25.528 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:25.528 "is_configured": true, 00:13:25.528 "data_offset": 0, 00:13:25.528 "data_size": 65536 00:13:25.528 } 00:13:25.528 ] 00:13:25.528 }' 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.528 11:25:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:26.095 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.095 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.095 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.096 [2024-11-20 11:25:33.848455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.096 "name": "Existed_Raid", 00:13:26.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.096 "strip_size_kb": 0, 00:13:26.096 "state": "configuring", 00:13:26.096 "raid_level": "raid1", 00:13:26.096 "superblock": false, 00:13:26.096 "num_base_bdevs": 3, 00:13:26.096 "num_base_bdevs_discovered": 2, 00:13:26.096 "num_base_bdevs_operational": 3, 00:13:26.096 "base_bdevs_list": [ 00:13:26.096 { 00:13:26.096 "name": null, 00:13:26.096 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:26.096 "is_configured": false, 00:13:26.096 "data_offset": 0, 00:13:26.096 "data_size": 65536 00:13:26.096 }, 00:13:26.096 { 00:13:26.096 "name": "BaseBdev2", 00:13:26.096 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:26.096 "is_configured": true, 00:13:26.096 "data_offset": 0, 00:13:26.096 "data_size": 65536 00:13:26.096 }, 00:13:26.096 { 
00:13:26.096 "name": "BaseBdev3", 00:13:26.096 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:26.096 "is_configured": true, 00:13:26.096 "data_offset": 0, 00:13:26.096 "data_size": 65536 00:13:26.096 } 00:13:26.096 ] 00:13:26.096 }' 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.096 11:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:26.664 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u eb5af8b0-5f59-4e7a-8067-3a71f5890029 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 11:25:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 [2024-11-20 11:25:34.474686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:26.665 [2024-11-20 11:25:34.474756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:26.665 [2024-11-20 11:25:34.474770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:26.665 [2024-11-20 11:25:34.475086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:26.665 [2024-11-20 11:25:34.475321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:26.665 [2024-11-20 11:25:34.475354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:26.665 [2024-11-20 11:25:34.475675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.665 NewBaseBdev 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.665 [ 00:13:26.665 { 00:13:26.665 "name": "NewBaseBdev", 00:13:26.665 "aliases": [ 00:13:26.665 "eb5af8b0-5f59-4e7a-8067-3a71f5890029" 00:13:26.665 ], 00:13:26.665 "product_name": "Malloc disk", 00:13:26.665 "block_size": 512, 00:13:26.665 "num_blocks": 65536, 00:13:26.665 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:26.665 "assigned_rate_limits": { 00:13:26.665 "rw_ios_per_sec": 0, 00:13:26.665 "rw_mbytes_per_sec": 0, 00:13:26.665 "r_mbytes_per_sec": 0, 00:13:26.665 "w_mbytes_per_sec": 0 00:13:26.665 }, 00:13:26.665 "claimed": true, 00:13:26.665 "claim_type": "exclusive_write", 00:13:26.665 "zoned": false, 00:13:26.665 "supported_io_types": { 00:13:26.665 "read": true, 00:13:26.665 "write": true, 00:13:26.665 "unmap": true, 00:13:26.665 "flush": true, 00:13:26.665 "reset": true, 00:13:26.665 "nvme_admin": false, 00:13:26.665 "nvme_io": false, 00:13:26.665 "nvme_io_md": false, 00:13:26.665 "write_zeroes": true, 00:13:26.665 "zcopy": true, 00:13:26.665 "get_zone_info": false, 00:13:26.665 "zone_management": false, 00:13:26.665 "zone_append": false, 00:13:26.665 "compare": false, 00:13:26.665 "compare_and_write": false, 00:13:26.665 "abort": true, 00:13:26.665 "seek_hole": false, 00:13:26.665 "seek_data": false, 00:13:26.665 "copy": true, 00:13:26.665 "nvme_iov_md": false 00:13:26.665 }, 00:13:26.665 "memory_domains": [ 00:13:26.665 { 00:13:26.665 
"dma_device_id": "system", 00:13:26.665 "dma_device_type": 1 00:13:26.665 }, 00:13:26.665 { 00:13:26.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.665 "dma_device_type": 2 00:13:26.665 } 00:13:26.665 ], 00:13:26.665 "driver_specific": {} 00:13:26.665 } 00:13:26.665 ] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.665 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.924 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.924 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.925 "name": "Existed_Raid", 00:13:26.925 "uuid": "df0ac4b9-db21-4c1e-ab4d-9d19ba5a9121", 00:13:26.925 "strip_size_kb": 0, 00:13:26.925 "state": "online", 00:13:26.925 "raid_level": "raid1", 00:13:26.925 "superblock": false, 00:13:26.925 "num_base_bdevs": 3, 00:13:26.925 "num_base_bdevs_discovered": 3, 00:13:26.925 "num_base_bdevs_operational": 3, 00:13:26.925 "base_bdevs_list": [ 00:13:26.925 { 00:13:26.925 "name": "NewBaseBdev", 00:13:26.925 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:26.925 "is_configured": true, 00:13:26.925 "data_offset": 0, 00:13:26.925 "data_size": 65536 00:13:26.925 }, 00:13:26.925 { 00:13:26.925 "name": "BaseBdev2", 00:13:26.925 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:26.925 "is_configured": true, 00:13:26.925 "data_offset": 0, 00:13:26.925 "data_size": 65536 00:13:26.925 }, 00:13:26.925 { 00:13:26.925 "name": "BaseBdev3", 00:13:26.925 "uuid": "1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:26.925 "is_configured": true, 00:13:26.925 "data_offset": 0, 00:13:26.925 "data_size": 65536 00:13:26.925 } 00:13:26.925 ] 00:13:26.925 }' 00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.925 11:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.184 11:25:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.184 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.444 [2024-11-20 11:25:35.031331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.444 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.444 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.444 "name": "Existed_Raid", 00:13:27.444 "aliases": [ 00:13:27.444 "df0ac4b9-db21-4c1e-ab4d-9d19ba5a9121" 00:13:27.444 ], 00:13:27.444 "product_name": "Raid Volume", 00:13:27.444 "block_size": 512, 00:13:27.444 "num_blocks": 65536, 00:13:27.444 "uuid": "df0ac4b9-db21-4c1e-ab4d-9d19ba5a9121", 00:13:27.444 "assigned_rate_limits": { 00:13:27.444 "rw_ios_per_sec": 0, 00:13:27.444 "rw_mbytes_per_sec": 0, 00:13:27.444 "r_mbytes_per_sec": 0, 00:13:27.444 "w_mbytes_per_sec": 0 00:13:27.444 }, 00:13:27.444 "claimed": false, 00:13:27.444 "zoned": false, 00:13:27.444 "supported_io_types": { 00:13:27.444 "read": true, 00:13:27.444 "write": true, 00:13:27.444 "unmap": false, 00:13:27.444 "flush": false, 00:13:27.444 "reset": true, 00:13:27.444 "nvme_admin": false, 00:13:27.444 "nvme_io": false, 00:13:27.444 "nvme_io_md": false, 00:13:27.444 "write_zeroes": true, 00:13:27.444 "zcopy": false, 00:13:27.444 
"get_zone_info": false, 00:13:27.444 "zone_management": false, 00:13:27.444 "zone_append": false, 00:13:27.444 "compare": false, 00:13:27.444 "compare_and_write": false, 00:13:27.444 "abort": false, 00:13:27.444 "seek_hole": false, 00:13:27.444 "seek_data": false, 00:13:27.444 "copy": false, 00:13:27.445 "nvme_iov_md": false 00:13:27.445 }, 00:13:27.445 "memory_domains": [ 00:13:27.445 { 00:13:27.445 "dma_device_id": "system", 00:13:27.445 "dma_device_type": 1 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.445 "dma_device_type": 2 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "dma_device_id": "system", 00:13:27.445 "dma_device_type": 1 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.445 "dma_device_type": 2 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "dma_device_id": "system", 00:13:27.445 "dma_device_type": 1 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.445 "dma_device_type": 2 00:13:27.445 } 00:13:27.445 ], 00:13:27.445 "driver_specific": { 00:13:27.445 "raid": { 00:13:27.445 "uuid": "df0ac4b9-db21-4c1e-ab4d-9d19ba5a9121", 00:13:27.445 "strip_size_kb": 0, 00:13:27.445 "state": "online", 00:13:27.445 "raid_level": "raid1", 00:13:27.445 "superblock": false, 00:13:27.445 "num_base_bdevs": 3, 00:13:27.445 "num_base_bdevs_discovered": 3, 00:13:27.445 "num_base_bdevs_operational": 3, 00:13:27.445 "base_bdevs_list": [ 00:13:27.445 { 00:13:27.445 "name": "NewBaseBdev", 00:13:27.445 "uuid": "eb5af8b0-5f59-4e7a-8067-3a71f5890029", 00:13:27.445 "is_configured": true, 00:13:27.445 "data_offset": 0, 00:13:27.445 "data_size": 65536 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "name": "BaseBdev2", 00:13:27.445 "uuid": "f4f40481-c400-4e03-9029-c457cf7417da", 00:13:27.445 "is_configured": true, 00:13:27.445 "data_offset": 0, 00:13:27.445 "data_size": 65536 00:13:27.445 }, 00:13:27.445 { 00:13:27.445 "name": "BaseBdev3", 00:13:27.445 "uuid": 
"1c64e638-1691-4fb6-b372-3e612dc656a3", 00:13:27.445 "is_configured": true, 00:13:27.445 "data_offset": 0, 00:13:27.445 "data_size": 65536 00:13:27.445 } 00:13:27.445 ] 00:13:27.445 } 00:13:27.445 } 00:13:27.445 }' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:27.445 BaseBdev2 00:13:27.445 BaseBdev3' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.445 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.704 
[2024-11-20 11:25:35.332127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.704 [2024-11-20 11:25:35.332188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.704 [2024-11-20 11:25:35.332279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.704 [2024-11-20 11:25:35.332690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.704 [2024-11-20 11:25:35.332728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67368 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67368 ']' 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67368 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67368 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.704 killing process with pid 67368 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67368' 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67368 00:13:27.704 [2024-11-20 
11:25:35.368821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.704 11:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67368 00:13:27.962 [2024-11-20 11:25:35.647235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:28.900 00:13:28.900 real 0m11.762s 00:13:28.900 user 0m19.483s 00:13:28.900 sys 0m1.595s 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.900 ************************************ 00:13:28.900 END TEST raid_state_function_test 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.900 ************************************ 00:13:28.900 11:25:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:28.900 11:25:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:28.900 11:25:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.900 11:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.900 ************************************ 00:13:28.900 START TEST raid_state_function_test_sb 00:13:28.900 ************************************ 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:28.900 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:29.165 11:25:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:29.165 
11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68000 00:13:29.165 Process raid pid: 68000 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68000' 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68000 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68000 ']' 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.165 11:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.165 [2024-11-20 11:25:36.858674] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:29.165 [2024-11-20 11:25:36.858852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.427 [2024-11-20 11:25:37.048589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.427 [2024-11-20 11:25:37.202498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.685 [2024-11-20 11:25:37.414873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.685 [2024-11-20 11:25:37.414927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.253 [2024-11-20 11:25:37.915011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.253 [2024-11-20 11:25:37.915086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.253 [2024-11-20 11:25:37.915104] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.253 [2024-11-20 11:25:37.915120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.253 [2024-11-20 11:25:37.915131] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:30.253 [2024-11-20 11:25:37.915146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.253 "name": "Existed_Raid", 00:13:30.253 "uuid": "868c0981-65a1-4abf-b32e-d7a08cbd9c0c", 00:13:30.253 "strip_size_kb": 0, 00:13:30.253 "state": "configuring", 00:13:30.253 "raid_level": "raid1", 00:13:30.253 "superblock": true, 00:13:30.253 "num_base_bdevs": 3, 00:13:30.253 "num_base_bdevs_discovered": 0, 00:13:30.253 "num_base_bdevs_operational": 3, 00:13:30.253 "base_bdevs_list": [ 00:13:30.253 { 00:13:30.253 "name": "BaseBdev1", 00:13:30.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.253 "is_configured": false, 00:13:30.253 "data_offset": 0, 00:13:30.253 "data_size": 0 00:13:30.253 }, 00:13:30.253 { 00:13:30.253 "name": "BaseBdev2", 00:13:30.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.253 "is_configured": false, 00:13:30.253 "data_offset": 0, 00:13:30.253 "data_size": 0 00:13:30.253 }, 00:13:30.253 { 00:13:30.253 "name": "BaseBdev3", 00:13:30.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.253 "is_configured": false, 00:13:30.253 "data_offset": 0, 00:13:30.253 "data_size": 0 00:13:30.253 } 00:13:30.253 ] 00:13:30.253 }' 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.253 11:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.821 [2024-11-20 11:25:38.411047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.821 [2024-11-20 11:25:38.411095] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.821 [2024-11-20 11:25:38.419035] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.821 [2024-11-20 11:25:38.419093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.821 [2024-11-20 11:25:38.419109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.821 [2024-11-20 11:25:38.419126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.821 [2024-11-20 11:25:38.419136] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:30.821 [2024-11-20 11:25:38.419151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.821 [2024-11-20 11:25:38.463921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.821 BaseBdev1 
00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:30.821 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.822 [ 00:13:30.822 { 00:13:30.822 "name": "BaseBdev1", 00:13:30.822 "aliases": [ 00:13:30.822 "90d85de4-3539-4214-8d62-a885cdbd919f" 00:13:30.822 ], 00:13:30.822 "product_name": "Malloc disk", 00:13:30.822 "block_size": 512, 00:13:30.822 "num_blocks": 65536, 00:13:30.822 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:30.822 "assigned_rate_limits": { 00:13:30.822 
"rw_ios_per_sec": 0, 00:13:30.822 "rw_mbytes_per_sec": 0, 00:13:30.822 "r_mbytes_per_sec": 0, 00:13:30.822 "w_mbytes_per_sec": 0 00:13:30.822 }, 00:13:30.822 "claimed": true, 00:13:30.822 "claim_type": "exclusive_write", 00:13:30.822 "zoned": false, 00:13:30.822 "supported_io_types": { 00:13:30.822 "read": true, 00:13:30.822 "write": true, 00:13:30.822 "unmap": true, 00:13:30.822 "flush": true, 00:13:30.822 "reset": true, 00:13:30.822 "nvme_admin": false, 00:13:30.822 "nvme_io": false, 00:13:30.822 "nvme_io_md": false, 00:13:30.822 "write_zeroes": true, 00:13:30.822 "zcopy": true, 00:13:30.822 "get_zone_info": false, 00:13:30.822 "zone_management": false, 00:13:30.822 "zone_append": false, 00:13:30.822 "compare": false, 00:13:30.822 "compare_and_write": false, 00:13:30.822 "abort": true, 00:13:30.822 "seek_hole": false, 00:13:30.822 "seek_data": false, 00:13:30.822 "copy": true, 00:13:30.822 "nvme_iov_md": false 00:13:30.822 }, 00:13:30.822 "memory_domains": [ 00:13:30.822 { 00:13:30.822 "dma_device_id": "system", 00:13:30.822 "dma_device_type": 1 00:13:30.822 }, 00:13:30.822 { 00:13:30.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.822 "dma_device_type": 2 00:13:30.822 } 00:13:30.822 ], 00:13:30.822 "driver_specific": {} 00:13:30.822 } 00:13:30.822 ] 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.822 "name": "Existed_Raid", 00:13:30.822 "uuid": "d1058818-7363-46e3-a7b2-3b967bbcd100", 00:13:30.822 "strip_size_kb": 0, 00:13:30.822 "state": "configuring", 00:13:30.822 "raid_level": "raid1", 00:13:30.822 "superblock": true, 00:13:30.822 "num_base_bdevs": 3, 00:13:30.822 "num_base_bdevs_discovered": 1, 00:13:30.822 "num_base_bdevs_operational": 3, 00:13:30.822 "base_bdevs_list": [ 00:13:30.822 { 00:13:30.822 "name": "BaseBdev1", 00:13:30.822 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:30.822 "is_configured": true, 00:13:30.822 "data_offset": 2048, 00:13:30.822 "data_size": 63488 
00:13:30.822 }, 00:13:30.822 { 00:13:30.822 "name": "BaseBdev2", 00:13:30.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.822 "is_configured": false, 00:13:30.822 "data_offset": 0, 00:13:30.822 "data_size": 0 00:13:30.822 }, 00:13:30.822 { 00:13:30.822 "name": "BaseBdev3", 00:13:30.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.822 "is_configured": false, 00:13:30.822 "data_offset": 0, 00:13:30.822 "data_size": 0 00:13:30.822 } 00:13:30.822 ] 00:13:30.822 }' 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.822 11:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 [2024-11-20 11:25:39.008128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.391 [2024-11-20 11:25:39.008195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 [2024-11-20 11:25:39.016180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.391 [2024-11-20 11:25:39.018753] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.391 [2024-11-20 11:25:39.018805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:31.391 [2024-11-20 11:25:39.018822] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.391 [2024-11-20 11:25:39.018838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.391 "name": "Existed_Raid", 00:13:31.391 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:31.391 "strip_size_kb": 0, 00:13:31.391 "state": "configuring", 00:13:31.391 "raid_level": "raid1", 00:13:31.391 "superblock": true, 00:13:31.391 "num_base_bdevs": 3, 00:13:31.391 "num_base_bdevs_discovered": 1, 00:13:31.391 "num_base_bdevs_operational": 3, 00:13:31.391 "base_bdevs_list": [ 00:13:31.391 { 00:13:31.391 "name": "BaseBdev1", 00:13:31.391 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:31.391 "is_configured": true, 00:13:31.391 "data_offset": 2048, 00:13:31.391 "data_size": 63488 00:13:31.391 }, 00:13:31.391 { 00:13:31.391 "name": "BaseBdev2", 00:13:31.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.391 "is_configured": false, 00:13:31.391 "data_offset": 0, 00:13:31.391 "data_size": 0 00:13:31.391 }, 00:13:31.391 { 00:13:31.391 "name": "BaseBdev3", 00:13:31.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.391 "is_configured": false, 00:13:31.391 "data_offset": 0, 00:13:31.391 "data_size": 0 00:13:31.391 } 00:13:31.391 ] 00:13:31.391 }' 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.391 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:31.957 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.957 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.957 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.957 [2024-11-20 11:25:39.571568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.957 BaseBdev2 00:13:31.957 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.957 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.958 [ 00:13:31.958 { 00:13:31.958 "name": "BaseBdev2", 00:13:31.958 "aliases": [ 00:13:31.958 "c218fb97-e95e-4a55-a93b-aa149bdcbb12" 00:13:31.958 ], 00:13:31.958 "product_name": "Malloc disk", 00:13:31.958 "block_size": 512, 00:13:31.958 "num_blocks": 65536, 00:13:31.958 "uuid": "c218fb97-e95e-4a55-a93b-aa149bdcbb12", 00:13:31.958 "assigned_rate_limits": { 00:13:31.958 "rw_ios_per_sec": 0, 00:13:31.958 "rw_mbytes_per_sec": 0, 00:13:31.958 "r_mbytes_per_sec": 0, 00:13:31.958 "w_mbytes_per_sec": 0 00:13:31.958 }, 00:13:31.958 "claimed": true, 00:13:31.958 "claim_type": "exclusive_write", 00:13:31.958 "zoned": false, 00:13:31.958 "supported_io_types": { 00:13:31.958 "read": true, 00:13:31.958 "write": true, 00:13:31.958 "unmap": true, 00:13:31.958 "flush": true, 00:13:31.958 "reset": true, 00:13:31.958 "nvme_admin": false, 00:13:31.958 "nvme_io": false, 00:13:31.958 "nvme_io_md": false, 00:13:31.958 "write_zeroes": true, 00:13:31.958 "zcopy": true, 00:13:31.958 "get_zone_info": false, 00:13:31.958 "zone_management": false, 00:13:31.958 "zone_append": false, 00:13:31.958 "compare": false, 00:13:31.958 "compare_and_write": false, 00:13:31.958 "abort": true, 00:13:31.958 "seek_hole": false, 00:13:31.958 "seek_data": false, 00:13:31.958 "copy": true, 00:13:31.958 "nvme_iov_md": false 00:13:31.958 }, 00:13:31.958 "memory_domains": [ 00:13:31.958 { 00:13:31.958 "dma_device_id": "system", 00:13:31.958 "dma_device_type": 1 00:13:31.958 }, 00:13:31.958 { 00:13:31.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.958 "dma_device_type": 2 00:13:31.958 } 00:13:31.958 ], 00:13:31.958 "driver_specific": {} 00:13:31.958 } 00:13:31.958 ] 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.958 
11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.958 "name": "Existed_Raid", 00:13:31.958 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:31.958 "strip_size_kb": 0, 00:13:31.958 "state": "configuring", 00:13:31.958 "raid_level": "raid1", 00:13:31.958 "superblock": true, 00:13:31.958 "num_base_bdevs": 3, 00:13:31.958 "num_base_bdevs_discovered": 2, 00:13:31.958 "num_base_bdevs_operational": 3, 00:13:31.958 "base_bdevs_list": [ 00:13:31.958 { 00:13:31.958 "name": "BaseBdev1", 00:13:31.958 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:31.958 "is_configured": true, 00:13:31.958 "data_offset": 2048, 00:13:31.958 "data_size": 63488 00:13:31.958 }, 00:13:31.958 { 00:13:31.958 "name": "BaseBdev2", 00:13:31.958 "uuid": "c218fb97-e95e-4a55-a93b-aa149bdcbb12", 00:13:31.958 "is_configured": true, 00:13:31.958 "data_offset": 2048, 00:13:31.958 "data_size": 63488 00:13:31.958 }, 00:13:31.958 { 00:13:31.958 "name": "BaseBdev3", 00:13:31.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.958 "is_configured": false, 00:13:31.958 "data_offset": 0, 00:13:31.958 "data_size": 0 00:13:31.958 } 00:13:31.958 ] 00:13:31.958 }' 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.958 11:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 [2024-11-20 11:25:40.156424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.526 [2024-11-20 11:25:40.156815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:32.526 [2024-11-20 11:25:40.156849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.526 [2024-11-20 11:25:40.157214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:32.526 BaseBdev3 00:13:32.526 [2024-11-20 11:25:40.157432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:32.526 [2024-11-20 11:25:40.157450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:32.526 [2024-11-20 11:25:40.157673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.526 11:25:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 [ 00:13:32.526 { 00:13:32.526 "name": "BaseBdev3", 00:13:32.526 "aliases": [ 00:13:32.526 "4850d69e-32e9-47de-bf0b-a5d6c01074cc" 00:13:32.526 ], 00:13:32.526 "product_name": "Malloc disk", 00:13:32.526 "block_size": 512, 00:13:32.526 "num_blocks": 65536, 00:13:32.526 "uuid": "4850d69e-32e9-47de-bf0b-a5d6c01074cc", 00:13:32.526 "assigned_rate_limits": { 00:13:32.526 "rw_ios_per_sec": 0, 00:13:32.526 "rw_mbytes_per_sec": 0, 00:13:32.526 "r_mbytes_per_sec": 0, 00:13:32.526 "w_mbytes_per_sec": 0 00:13:32.526 }, 00:13:32.526 "claimed": true, 00:13:32.526 "claim_type": "exclusive_write", 00:13:32.526 "zoned": false, 00:13:32.526 "supported_io_types": { 00:13:32.526 "read": true, 00:13:32.526 "write": true, 00:13:32.526 "unmap": true, 00:13:32.526 "flush": true, 00:13:32.526 "reset": true, 00:13:32.526 "nvme_admin": false, 00:13:32.526 "nvme_io": false, 00:13:32.526 "nvme_io_md": false, 00:13:32.526 "write_zeroes": true, 00:13:32.526 "zcopy": true, 00:13:32.526 "get_zone_info": false, 00:13:32.526 "zone_management": false, 00:13:32.526 "zone_append": false, 00:13:32.526 "compare": false, 00:13:32.526 "compare_and_write": false, 00:13:32.526 "abort": true, 00:13:32.526 "seek_hole": false, 00:13:32.526 "seek_data": false, 00:13:32.526 "copy": true, 00:13:32.526 "nvme_iov_md": false 00:13:32.526 }, 00:13:32.526 "memory_domains": [ 00:13:32.526 { 00:13:32.526 "dma_device_id": "system", 00:13:32.526 "dma_device_type": 1 00:13:32.526 }, 00:13:32.526 { 00:13:32.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.526 "dma_device_type": 2 00:13:32.526 } 00:13:32.526 ], 00:13:32.526 "driver_specific": {} 00:13:32.526 } 00:13:32.526 ] 
00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.526 
11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.526 "name": "Existed_Raid", 00:13:32.526 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:32.526 "strip_size_kb": 0, 00:13:32.526 "state": "online", 00:13:32.526 "raid_level": "raid1", 00:13:32.526 "superblock": true, 00:13:32.526 "num_base_bdevs": 3, 00:13:32.526 "num_base_bdevs_discovered": 3, 00:13:32.526 "num_base_bdevs_operational": 3, 00:13:32.526 "base_bdevs_list": [ 00:13:32.526 { 00:13:32.526 "name": "BaseBdev1", 00:13:32.526 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:32.526 "is_configured": true, 00:13:32.526 "data_offset": 2048, 00:13:32.526 "data_size": 63488 00:13:32.526 }, 00:13:32.526 { 00:13:32.526 "name": "BaseBdev2", 00:13:32.526 "uuid": "c218fb97-e95e-4a55-a93b-aa149bdcbb12", 00:13:32.526 "is_configured": true, 00:13:32.526 "data_offset": 2048, 00:13:32.526 "data_size": 63488 00:13:32.526 }, 00:13:32.526 { 00:13:32.526 "name": "BaseBdev3", 00:13:32.526 "uuid": "4850d69e-32e9-47de-bf0b-a5d6c01074cc", 00:13:32.526 "is_configured": true, 00:13:32.526 "data_offset": 2048, 00:13:32.526 "data_size": 63488 00:13:32.526 } 00:13:32.526 ] 00:13:32.526 }' 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.526 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.093 [2024-11-20 11:25:40.681127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.093 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.093 "name": "Existed_Raid", 00:13:33.093 "aliases": [ 00:13:33.093 "4f76f520-05c1-44ef-a7a8-2b7153e810dd" 00:13:33.093 ], 00:13:33.093 "product_name": "Raid Volume", 00:13:33.093 "block_size": 512, 00:13:33.093 "num_blocks": 63488, 00:13:33.093 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:33.093 "assigned_rate_limits": { 00:13:33.093 "rw_ios_per_sec": 0, 00:13:33.093 "rw_mbytes_per_sec": 0, 00:13:33.093 "r_mbytes_per_sec": 0, 00:13:33.093 "w_mbytes_per_sec": 0 00:13:33.093 }, 00:13:33.093 "claimed": false, 00:13:33.093 "zoned": false, 00:13:33.093 "supported_io_types": { 00:13:33.093 "read": true, 00:13:33.093 "write": true, 00:13:33.093 "unmap": false, 00:13:33.093 "flush": false, 00:13:33.093 "reset": true, 00:13:33.093 "nvme_admin": false, 00:13:33.093 "nvme_io": false, 00:13:33.093 "nvme_io_md": false, 00:13:33.093 "write_zeroes": true, 
00:13:33.093 "zcopy": false, 00:13:33.093 "get_zone_info": false, 00:13:33.093 "zone_management": false, 00:13:33.093 "zone_append": false, 00:13:33.093 "compare": false, 00:13:33.093 "compare_and_write": false, 00:13:33.093 "abort": false, 00:13:33.093 "seek_hole": false, 00:13:33.093 "seek_data": false, 00:13:33.093 "copy": false, 00:13:33.093 "nvme_iov_md": false 00:13:33.093 }, 00:13:33.094 "memory_domains": [ 00:13:33.094 { 00:13:33.094 "dma_device_id": "system", 00:13:33.094 "dma_device_type": 1 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.094 "dma_device_type": 2 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "dma_device_id": "system", 00:13:33.094 "dma_device_type": 1 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.094 "dma_device_type": 2 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "dma_device_id": "system", 00:13:33.094 "dma_device_type": 1 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.094 "dma_device_type": 2 00:13:33.094 } 00:13:33.094 ], 00:13:33.094 "driver_specific": { 00:13:33.094 "raid": { 00:13:33.094 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:33.094 "strip_size_kb": 0, 00:13:33.094 "state": "online", 00:13:33.094 "raid_level": "raid1", 00:13:33.094 "superblock": true, 00:13:33.094 "num_base_bdevs": 3, 00:13:33.094 "num_base_bdevs_discovered": 3, 00:13:33.094 "num_base_bdevs_operational": 3, 00:13:33.094 "base_bdevs_list": [ 00:13:33.094 { 00:13:33.094 "name": "BaseBdev1", 00:13:33.094 "uuid": "90d85de4-3539-4214-8d62-a885cdbd919f", 00:13:33.094 "is_configured": true, 00:13:33.094 "data_offset": 2048, 00:13:33.094 "data_size": 63488 00:13:33.094 }, 00:13:33.094 { 00:13:33.094 "name": "BaseBdev2", 00:13:33.094 "uuid": "c218fb97-e95e-4a55-a93b-aa149bdcbb12", 00:13:33.094 "is_configured": true, 00:13:33.094 "data_offset": 2048, 00:13:33.094 "data_size": 63488 00:13:33.094 }, 00:13:33.094 { 
00:13:33.094 "name": "BaseBdev3", 00:13:33.094 "uuid": "4850d69e-32e9-47de-bf0b-a5d6c01074cc", 00:13:33.094 "is_configured": true, 00:13:33.094 "data_offset": 2048, 00:13:33.094 "data_size": 63488 00:13:33.094 } 00:13:33.094 ] 00:13:33.094 } 00:13:33.094 } 00:13:33.094 }' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:33.094 BaseBdev2 00:13:33.094 BaseBdev3' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.094 11:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.094 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 11:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.353 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.353 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.353 11:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 [2024-11-20 11:25:41.004897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.353 
11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.353 "name": "Existed_Raid", 00:13:33.353 "uuid": "4f76f520-05c1-44ef-a7a8-2b7153e810dd", 00:13:33.353 "strip_size_kb": 0, 00:13:33.353 "state": "online", 00:13:33.353 "raid_level": "raid1", 00:13:33.353 "superblock": true, 00:13:33.353 "num_base_bdevs": 3, 00:13:33.353 "num_base_bdevs_discovered": 2, 00:13:33.353 "num_base_bdevs_operational": 2, 00:13:33.353 "base_bdevs_list": [ 00:13:33.353 { 00:13:33.353 "name": null, 00:13:33.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.353 "is_configured": false, 00:13:33.353 "data_offset": 0, 00:13:33.353 "data_size": 63488 00:13:33.353 }, 00:13:33.353 { 00:13:33.353 "name": "BaseBdev2", 00:13:33.353 "uuid": "c218fb97-e95e-4a55-a93b-aa149bdcbb12", 00:13:33.353 "is_configured": true, 00:13:33.353 "data_offset": 2048, 00:13:33.353 "data_size": 63488 00:13:33.353 }, 00:13:33.353 { 00:13:33.353 "name": "BaseBdev3", 00:13:33.353 "uuid": "4850d69e-32e9-47de-bf0b-a5d6c01074cc", 00:13:33.353 "is_configured": true, 00:13:33.353 "data_offset": 2048, 00:13:33.353 "data_size": 63488 00:13:33.353 } 00:13:33.353 ] 00:13:33.353 }' 00:13:33.353 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.353 
11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.921 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:33.921 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.922 [2024-11-20 11:25:41.657465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.922 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.179 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:34.179 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:34.179 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.180 [2024-11-20 11:25:41.806611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:34.180 [2024-11-20 11:25:41.806770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.180 [2024-11-20 11:25:41.892488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.180 [2024-11-20 11:25:41.892579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.180 [2024-11-20 11:25:41.892599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.180 BaseBdev2 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.180 [ 00:13:34.180 { 00:13:34.180 "name": "BaseBdev2", 00:13:34.180 "aliases": [ 00:13:34.180 "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d" 00:13:34.180 ], 00:13:34.180 "product_name": "Malloc disk", 00:13:34.180 "block_size": 512, 00:13:34.180 "num_blocks": 65536, 00:13:34.180 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:34.180 "assigned_rate_limits": { 00:13:34.180 "rw_ios_per_sec": 0, 00:13:34.180 "rw_mbytes_per_sec": 0, 00:13:34.180 "r_mbytes_per_sec": 0, 00:13:34.180 "w_mbytes_per_sec": 0 00:13:34.180 }, 00:13:34.180 "claimed": false, 00:13:34.180 "zoned": false, 00:13:34.180 "supported_io_types": { 00:13:34.180 "read": true, 00:13:34.180 "write": true, 00:13:34.180 "unmap": true, 00:13:34.180 "flush": true, 00:13:34.180 "reset": true, 00:13:34.180 "nvme_admin": false, 00:13:34.180 "nvme_io": false, 00:13:34.180 
"nvme_io_md": false, 00:13:34.180 "write_zeroes": true, 00:13:34.180 "zcopy": true, 00:13:34.180 "get_zone_info": false, 00:13:34.180 "zone_management": false, 00:13:34.180 "zone_append": false, 00:13:34.180 "compare": false, 00:13:34.180 "compare_and_write": false, 00:13:34.180 "abort": true, 00:13:34.180 "seek_hole": false, 00:13:34.180 "seek_data": false, 00:13:34.180 "copy": true, 00:13:34.180 "nvme_iov_md": false 00:13:34.180 }, 00:13:34.180 "memory_domains": [ 00:13:34.180 { 00:13:34.180 "dma_device_id": "system", 00:13:34.180 "dma_device_type": 1 00:13:34.180 }, 00:13:34.180 { 00:13:34.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.180 "dma_device_type": 2 00:13:34.180 } 00:13:34.180 ], 00:13:34.180 "driver_specific": {} 00:13:34.180 } 00:13:34.180 ] 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.180 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.541 BaseBdev3 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.541 [ 00:13:34.541 { 00:13:34.541 "name": "BaseBdev3", 00:13:34.541 "aliases": [ 00:13:34.541 "10541ea2-15a8-4641-a00d-a0c42bafbbca" 00:13:34.541 ], 00:13:34.541 "product_name": "Malloc disk", 00:13:34.541 "block_size": 512, 00:13:34.541 "num_blocks": 65536, 00:13:34.541 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:34.541 "assigned_rate_limits": { 00:13:34.541 "rw_ios_per_sec": 0, 00:13:34.541 "rw_mbytes_per_sec": 0, 00:13:34.541 "r_mbytes_per_sec": 0, 00:13:34.541 "w_mbytes_per_sec": 0 00:13:34.541 }, 00:13:34.541 "claimed": false, 00:13:34.541 "zoned": false, 00:13:34.541 "supported_io_types": { 00:13:34.541 "read": true, 00:13:34.541 "write": true, 00:13:34.541 "unmap": true, 00:13:34.541 "flush": true, 00:13:34.541 "reset": true, 00:13:34.541 "nvme_admin": false, 
00:13:34.541 "nvme_io": false, 00:13:34.541 "nvme_io_md": false, 00:13:34.541 "write_zeroes": true, 00:13:34.541 "zcopy": true, 00:13:34.541 "get_zone_info": false, 00:13:34.541 "zone_management": false, 00:13:34.541 "zone_append": false, 00:13:34.541 "compare": false, 00:13:34.541 "compare_and_write": false, 00:13:34.541 "abort": true, 00:13:34.541 "seek_hole": false, 00:13:34.541 "seek_data": false, 00:13:34.541 "copy": true, 00:13:34.541 "nvme_iov_md": false 00:13:34.541 }, 00:13:34.541 "memory_domains": [ 00:13:34.541 { 00:13:34.541 "dma_device_id": "system", 00:13:34.541 "dma_device_type": 1 00:13:34.541 }, 00:13:34.541 { 00:13:34.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.541 "dma_device_type": 2 00:13:34.541 } 00:13:34.541 ], 00:13:34.541 "driver_specific": {} 00:13:34.541 } 00:13:34.541 ] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.541 [2024-11-20 11:25:42.096493] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.541 [2024-11-20 11:25:42.096552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.541 [2024-11-20 11:25:42.096579] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.541 [2024-11-20 11:25:42.099046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.541 
11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.541 "name": "Existed_Raid", 00:13:34.541 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:34.541 "strip_size_kb": 0, 00:13:34.541 "state": "configuring", 00:13:34.541 "raid_level": "raid1", 00:13:34.541 "superblock": true, 00:13:34.541 "num_base_bdevs": 3, 00:13:34.541 "num_base_bdevs_discovered": 2, 00:13:34.541 "num_base_bdevs_operational": 3, 00:13:34.541 "base_bdevs_list": [ 00:13:34.541 { 00:13:34.541 "name": "BaseBdev1", 00:13:34.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.541 "is_configured": false, 00:13:34.541 "data_offset": 0, 00:13:34.541 "data_size": 0 00:13:34.541 }, 00:13:34.541 { 00:13:34.541 "name": "BaseBdev2", 00:13:34.541 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:34.541 "is_configured": true, 00:13:34.541 "data_offset": 2048, 00:13:34.541 "data_size": 63488 00:13:34.541 }, 00:13:34.541 { 00:13:34.541 "name": "BaseBdev3", 00:13:34.541 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:34.541 "is_configured": true, 00:13:34.541 "data_offset": 2048, 00:13:34.541 "data_size": 63488 00:13:34.541 } 00:13:34.541 ] 00:13:34.541 }' 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.541 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.801 [2024-11-20 11:25:42.632715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:34.801 11:25:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.801 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.059 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.059 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.059 "name": 
"Existed_Raid", 00:13:35.059 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:35.059 "strip_size_kb": 0, 00:13:35.059 "state": "configuring", 00:13:35.059 "raid_level": "raid1", 00:13:35.059 "superblock": true, 00:13:35.059 "num_base_bdevs": 3, 00:13:35.059 "num_base_bdevs_discovered": 1, 00:13:35.059 "num_base_bdevs_operational": 3, 00:13:35.059 "base_bdevs_list": [ 00:13:35.059 { 00:13:35.059 "name": "BaseBdev1", 00:13:35.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.059 "is_configured": false, 00:13:35.059 "data_offset": 0, 00:13:35.059 "data_size": 0 00:13:35.059 }, 00:13:35.059 { 00:13:35.059 "name": null, 00:13:35.059 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:35.059 "is_configured": false, 00:13:35.060 "data_offset": 0, 00:13:35.060 "data_size": 63488 00:13:35.060 }, 00:13:35.060 { 00:13:35.060 "name": "BaseBdev3", 00:13:35.060 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:35.060 "is_configured": true, 00:13:35.060 "data_offset": 2048, 00:13:35.060 "data_size": 63488 00:13:35.060 } 00:13:35.060 ] 00:13:35.060 }' 00:13:35.060 11:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.060 11:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.318 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.318 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.318 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.318 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.318 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:35.578 
11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 [2024-11-20 11:25:43.231389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.578 BaseBdev1 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 [ 00:13:35.578 { 00:13:35.578 "name": "BaseBdev1", 00:13:35.578 "aliases": [ 00:13:35.578 "dddbb391-811a-482b-a918-e138242146a5" 00:13:35.578 ], 00:13:35.578 "product_name": "Malloc disk", 00:13:35.578 "block_size": 512, 00:13:35.578 "num_blocks": 65536, 00:13:35.578 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:35.578 "assigned_rate_limits": { 00:13:35.578 "rw_ios_per_sec": 0, 00:13:35.578 "rw_mbytes_per_sec": 0, 00:13:35.578 "r_mbytes_per_sec": 0, 00:13:35.578 "w_mbytes_per_sec": 0 00:13:35.578 }, 00:13:35.578 "claimed": true, 00:13:35.578 "claim_type": "exclusive_write", 00:13:35.578 "zoned": false, 00:13:35.578 "supported_io_types": { 00:13:35.578 "read": true, 00:13:35.578 "write": true, 00:13:35.578 "unmap": true, 00:13:35.578 "flush": true, 00:13:35.578 "reset": true, 00:13:35.578 "nvme_admin": false, 00:13:35.578 "nvme_io": false, 00:13:35.578 "nvme_io_md": false, 00:13:35.578 "write_zeroes": true, 00:13:35.578 "zcopy": true, 00:13:35.578 "get_zone_info": false, 00:13:35.578 "zone_management": false, 00:13:35.578 "zone_append": false, 00:13:35.578 "compare": false, 00:13:35.578 "compare_and_write": false, 00:13:35.578 "abort": true, 00:13:35.578 "seek_hole": false, 00:13:35.578 "seek_data": false, 00:13:35.578 "copy": true, 00:13:35.578 "nvme_iov_md": false 00:13:35.578 }, 00:13:35.578 "memory_domains": [ 00:13:35.578 { 00:13:35.578 "dma_device_id": "system", 00:13:35.578 "dma_device_type": 1 00:13:35.578 }, 00:13:35.578 { 00:13:35.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.578 "dma_device_type": 2 00:13:35.578 } 00:13:35.578 ], 00:13:35.578 "driver_specific": {} 00:13:35.578 } 00:13:35.578 ] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.578 
11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.578 "name": "Existed_Raid", 00:13:35.578 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:35.578 "strip_size_kb": 0, 
00:13:35.578 "state": "configuring", 00:13:35.578 "raid_level": "raid1", 00:13:35.578 "superblock": true, 00:13:35.578 "num_base_bdevs": 3, 00:13:35.578 "num_base_bdevs_discovered": 2, 00:13:35.578 "num_base_bdevs_operational": 3, 00:13:35.578 "base_bdevs_list": [ 00:13:35.578 { 00:13:35.578 "name": "BaseBdev1", 00:13:35.578 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:35.578 "is_configured": true, 00:13:35.578 "data_offset": 2048, 00:13:35.578 "data_size": 63488 00:13:35.578 }, 00:13:35.578 { 00:13:35.578 "name": null, 00:13:35.578 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:35.578 "is_configured": false, 00:13:35.578 "data_offset": 0, 00:13:35.578 "data_size": 63488 00:13:35.578 }, 00:13:35.578 { 00:13:35.578 "name": "BaseBdev3", 00:13:35.578 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:35.578 "is_configured": true, 00:13:35.578 "data_offset": 2048, 00:13:35.578 "data_size": 63488 00:13:35.578 } 00:13:35.578 ] 00:13:35.578 }' 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.578 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.147 [2024-11-20 11:25:43.815583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.147 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.148 11:25:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.148 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.148 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.148 "name": "Existed_Raid", 00:13:36.148 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:36.148 "strip_size_kb": 0, 00:13:36.148 "state": "configuring", 00:13:36.148 "raid_level": "raid1", 00:13:36.148 "superblock": true, 00:13:36.148 "num_base_bdevs": 3, 00:13:36.148 "num_base_bdevs_discovered": 1, 00:13:36.148 "num_base_bdevs_operational": 3, 00:13:36.148 "base_bdevs_list": [ 00:13:36.148 { 00:13:36.148 "name": "BaseBdev1", 00:13:36.148 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:36.148 "is_configured": true, 00:13:36.148 "data_offset": 2048, 00:13:36.148 "data_size": 63488 00:13:36.148 }, 00:13:36.148 { 00:13:36.148 "name": null, 00:13:36.148 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:36.148 "is_configured": false, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 63488 00:13:36.148 }, 00:13:36.148 { 00:13:36.148 "name": null, 00:13:36.148 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:36.148 "is_configured": false, 00:13:36.148 "data_offset": 0, 00:13:36.148 "data_size": 63488 00:13:36.148 } 00:13:36.148 ] 00:13:36.148 }' 00:13:36.148 11:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.148 11:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.715 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.715 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.715 11:25:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:36.715 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.716 [2024-11-20 11:25:44.395835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.716 "name": "Existed_Raid", 00:13:36.716 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:36.716 "strip_size_kb": 0, 00:13:36.716 "state": "configuring", 00:13:36.716 "raid_level": "raid1", 00:13:36.716 "superblock": true, 00:13:36.716 "num_base_bdevs": 3, 00:13:36.716 "num_base_bdevs_discovered": 2, 00:13:36.716 "num_base_bdevs_operational": 3, 00:13:36.716 "base_bdevs_list": [ 00:13:36.716 { 00:13:36.716 "name": "BaseBdev1", 00:13:36.716 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:36.716 "is_configured": true, 00:13:36.716 "data_offset": 2048, 00:13:36.716 "data_size": 63488 00:13:36.716 }, 00:13:36.716 { 00:13:36.716 "name": null, 00:13:36.716 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:36.716 "is_configured": false, 00:13:36.716 "data_offset": 0, 00:13:36.716 "data_size": 63488 00:13:36.716 }, 00:13:36.716 { 00:13:36.716 "name": "BaseBdev3", 00:13:36.716 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:36.716 "is_configured": true, 00:13:36.716 "data_offset": 2048, 00:13:36.716 "data_size": 63488 00:13:36.716 } 00:13:36.716 ] 00:13:36.716 }' 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.716 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.284 11:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.284 [2024-11-20 11:25:44.968051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.284 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.285 "name": "Existed_Raid", 00:13:37.285 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:37.285 "strip_size_kb": 0, 00:13:37.285 "state": "configuring", 00:13:37.285 "raid_level": "raid1", 00:13:37.285 "superblock": true, 00:13:37.285 "num_base_bdevs": 3, 00:13:37.285 "num_base_bdevs_discovered": 1, 00:13:37.285 "num_base_bdevs_operational": 3, 00:13:37.285 "base_bdevs_list": [ 00:13:37.285 { 00:13:37.285 "name": null, 00:13:37.285 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:37.285 "is_configured": false, 00:13:37.285 "data_offset": 0, 00:13:37.285 "data_size": 63488 00:13:37.285 }, 00:13:37.285 { 00:13:37.285 "name": null, 00:13:37.285 "uuid": 
"fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:37.285 "is_configured": false, 00:13:37.285 "data_offset": 0, 00:13:37.285 "data_size": 63488 00:13:37.285 }, 00:13:37.285 { 00:13:37.285 "name": "BaseBdev3", 00:13:37.285 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:37.285 "is_configured": true, 00:13:37.285 "data_offset": 2048, 00:13:37.285 "data_size": 63488 00:13:37.285 } 00:13:37.285 ] 00:13:37.285 }' 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.285 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 [2024-11-20 11:25:45.612652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.853 "name": "Existed_Raid", 00:13:37.853 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:37.853 "strip_size_kb": 0, 00:13:37.853 "state": "configuring", 00:13:37.853 
"raid_level": "raid1", 00:13:37.853 "superblock": true, 00:13:37.853 "num_base_bdevs": 3, 00:13:37.853 "num_base_bdevs_discovered": 2, 00:13:37.853 "num_base_bdevs_operational": 3, 00:13:37.853 "base_bdevs_list": [ 00:13:37.853 { 00:13:37.853 "name": null, 00:13:37.853 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:37.853 "is_configured": false, 00:13:37.853 "data_offset": 0, 00:13:37.853 "data_size": 63488 00:13:37.853 }, 00:13:37.853 { 00:13:37.853 "name": "BaseBdev2", 00:13:37.853 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:37.853 "is_configured": true, 00:13:37.853 "data_offset": 2048, 00:13:37.853 "data_size": 63488 00:13:37.853 }, 00:13:37.853 { 00:13:37.853 "name": "BaseBdev3", 00:13:37.853 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:37.853 "is_configured": true, 00:13:37.853 "data_offset": 2048, 00:13:37.853 "data_size": 63488 00:13:37.853 } 00:13:37.853 ] 00:13:37.853 }' 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.853 11:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.420 11:25:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dddbb391-811a-482b-a918-e138242146a5 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.420 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 [2024-11-20 11:25:46.292899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:38.679 [2024-11-20 11:25:46.293194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:38.679 [2024-11-20 11:25:46.293218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.679 [2024-11-20 11:25:46.293531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:38.679 NewBaseBdev 00:13:38.679 [2024-11-20 11:25:46.293771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.679 [2024-11-20 11:25:46.293796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:38.679 [2024-11-20 11:25:46.293959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:38.679 
11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.679 [ 00:13:38.679 { 00:13:38.679 "name": "NewBaseBdev", 00:13:38.679 "aliases": [ 00:13:38.679 "dddbb391-811a-482b-a918-e138242146a5" 00:13:38.679 ], 00:13:38.679 "product_name": "Malloc disk", 00:13:38.679 "block_size": 512, 00:13:38.679 "num_blocks": 65536, 00:13:38.679 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:38.679 "assigned_rate_limits": { 00:13:38.679 "rw_ios_per_sec": 0, 00:13:38.679 "rw_mbytes_per_sec": 0, 00:13:38.679 "r_mbytes_per_sec": 0, 00:13:38.679 "w_mbytes_per_sec": 0 00:13:38.679 }, 00:13:38.679 "claimed": true, 00:13:38.679 "claim_type": "exclusive_write", 00:13:38.679 
"zoned": false, 00:13:38.679 "supported_io_types": { 00:13:38.679 "read": true, 00:13:38.679 "write": true, 00:13:38.679 "unmap": true, 00:13:38.679 "flush": true, 00:13:38.679 "reset": true, 00:13:38.679 "nvme_admin": false, 00:13:38.679 "nvme_io": false, 00:13:38.679 "nvme_io_md": false, 00:13:38.679 "write_zeroes": true, 00:13:38.679 "zcopy": true, 00:13:38.679 "get_zone_info": false, 00:13:38.679 "zone_management": false, 00:13:38.679 "zone_append": false, 00:13:38.679 "compare": false, 00:13:38.679 "compare_and_write": false, 00:13:38.679 "abort": true, 00:13:38.679 "seek_hole": false, 00:13:38.679 "seek_data": false, 00:13:38.679 "copy": true, 00:13:38.679 "nvme_iov_md": false 00:13:38.679 }, 00:13:38.679 "memory_domains": [ 00:13:38.679 { 00:13:38.679 "dma_device_id": "system", 00:13:38.679 "dma_device_type": 1 00:13:38.679 }, 00:13:38.679 { 00:13:38.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.679 "dma_device_type": 2 00:13:38.679 } 00:13:38.679 ], 00:13:38.679 "driver_specific": {} 00:13:38.679 } 00:13:38.679 ] 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.679 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.680 "name": "Existed_Raid", 00:13:38.680 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:38.680 "strip_size_kb": 0, 00:13:38.680 "state": "online", 00:13:38.680 "raid_level": "raid1", 00:13:38.680 "superblock": true, 00:13:38.680 "num_base_bdevs": 3, 00:13:38.680 "num_base_bdevs_discovered": 3, 00:13:38.680 "num_base_bdevs_operational": 3, 00:13:38.680 "base_bdevs_list": [ 00:13:38.680 { 00:13:38.680 "name": "NewBaseBdev", 00:13:38.680 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 }, 00:13:38.680 { 00:13:38.680 "name": "BaseBdev2", 00:13:38.680 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 }, 00:13:38.680 
{ 00:13:38.680 "name": "BaseBdev3", 00:13:38.680 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:38.680 "is_configured": true, 00:13:38.680 "data_offset": 2048, 00:13:38.680 "data_size": 63488 00:13:38.680 } 00:13:38.680 ] 00:13:38.680 }' 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.680 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.247 [2024-11-20 11:25:46.861502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.247 "name": "Existed_Raid", 00:13:39.247 
"aliases": [ 00:13:39.247 "b14456e9-6fc1-4965-a98e-0ea68a2136b1" 00:13:39.247 ], 00:13:39.247 "product_name": "Raid Volume", 00:13:39.247 "block_size": 512, 00:13:39.247 "num_blocks": 63488, 00:13:39.247 "uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:39.247 "assigned_rate_limits": { 00:13:39.247 "rw_ios_per_sec": 0, 00:13:39.247 "rw_mbytes_per_sec": 0, 00:13:39.247 "r_mbytes_per_sec": 0, 00:13:39.247 "w_mbytes_per_sec": 0 00:13:39.247 }, 00:13:39.247 "claimed": false, 00:13:39.247 "zoned": false, 00:13:39.247 "supported_io_types": { 00:13:39.247 "read": true, 00:13:39.247 "write": true, 00:13:39.247 "unmap": false, 00:13:39.247 "flush": false, 00:13:39.247 "reset": true, 00:13:39.247 "nvme_admin": false, 00:13:39.247 "nvme_io": false, 00:13:39.247 "nvme_io_md": false, 00:13:39.247 "write_zeroes": true, 00:13:39.247 "zcopy": false, 00:13:39.247 "get_zone_info": false, 00:13:39.247 "zone_management": false, 00:13:39.247 "zone_append": false, 00:13:39.247 "compare": false, 00:13:39.247 "compare_and_write": false, 00:13:39.247 "abort": false, 00:13:39.247 "seek_hole": false, 00:13:39.247 "seek_data": false, 00:13:39.247 "copy": false, 00:13:39.247 "nvme_iov_md": false 00:13:39.247 }, 00:13:39.247 "memory_domains": [ 00:13:39.247 { 00:13:39.247 "dma_device_id": "system", 00:13:39.247 "dma_device_type": 1 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.247 "dma_device_type": 2 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "dma_device_id": "system", 00:13:39.247 "dma_device_type": 1 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.247 "dma_device_type": 2 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "dma_device_id": "system", 00:13:39.247 "dma_device_type": 1 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.247 "dma_device_type": 2 00:13:39.247 } 00:13:39.247 ], 00:13:39.247 "driver_specific": { 00:13:39.247 "raid": { 00:13:39.247 
"uuid": "b14456e9-6fc1-4965-a98e-0ea68a2136b1", 00:13:39.247 "strip_size_kb": 0, 00:13:39.247 "state": "online", 00:13:39.247 "raid_level": "raid1", 00:13:39.247 "superblock": true, 00:13:39.247 "num_base_bdevs": 3, 00:13:39.247 "num_base_bdevs_discovered": 3, 00:13:39.247 "num_base_bdevs_operational": 3, 00:13:39.247 "base_bdevs_list": [ 00:13:39.247 { 00:13:39.247 "name": "NewBaseBdev", 00:13:39.247 "uuid": "dddbb391-811a-482b-a918-e138242146a5", 00:13:39.247 "is_configured": true, 00:13:39.247 "data_offset": 2048, 00:13:39.247 "data_size": 63488 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "name": "BaseBdev2", 00:13:39.247 "uuid": "fc1bec4b-3ae2-497a-aaf2-3cab1f64a62d", 00:13:39.247 "is_configured": true, 00:13:39.247 "data_offset": 2048, 00:13:39.247 "data_size": 63488 00:13:39.247 }, 00:13:39.247 { 00:13:39.247 "name": "BaseBdev3", 00:13:39.247 "uuid": "10541ea2-15a8-4641-a00d-a0c42bafbbca", 00:13:39.247 "is_configured": true, 00:13:39.247 "data_offset": 2048, 00:13:39.247 "data_size": 63488 00:13:39.247 } 00:13:39.247 ] 00:13:39.247 } 00:13:39.247 } 00:13:39.247 }' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:39.247 BaseBdev2 00:13:39.247 BaseBdev3' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:39.247 11:25:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.247 11:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.247 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.248 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.506 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.506 [2024-11-20 11:25:47.153200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.507 [2024-11-20 11:25:47.153245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.507 [2024-11-20 11:25:47.153335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.507 [2024-11-20 11:25:47.153748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.507 [2024-11-20 11:25:47.153777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68000 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68000 ']' 
00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68000 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68000 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.507 killing process with pid 68000 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68000' 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68000 00:13:39.507 [2024-11-20 11:25:47.191011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.507 11:25:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68000 00:13:39.765 [2024-11-20 11:25:47.467305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.147 11:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:41.147 00:13:41.147 real 0m11.856s 00:13:41.147 user 0m19.602s 00:13:41.147 sys 0m1.647s 00:13:41.147 11:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.147 11:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.147 ************************************ 00:13:41.147 END TEST raid_state_function_test_sb 00:13:41.147 ************************************ 00:13:41.147 11:25:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:13:41.147 11:25:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:41.147 11:25:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.147 11:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.147 ************************************ 00:13:41.147 START TEST raid_superblock_test 00:13:41.147 ************************************ 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68637 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68637 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68637 ']' 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.147 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.147 [2024-11-20 11:25:48.760359] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:41.147 [2024-11-20 11:25:48.760534] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68637 ] 00:13:41.147 [2024-11-20 11:25:48.948544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.407 [2024-11-20 11:25:49.102760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.666 [2024-11-20 11:25:49.312534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.666 [2024-11-20 11:25:49.312582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:41.926 
11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.926 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.186 malloc1 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.186 [2024-11-20 11:25:49.816349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:42.186 [2024-11-20 11:25:49.816434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.186 [2024-11-20 11:25:49.816469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:42.186 [2024-11-20 11:25:49.816485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.186 [2024-11-20 11:25:49.819382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.186 [2024-11-20 11:25:49.819427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:42.186 pt1 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.186 malloc2 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.186 [2024-11-20 11:25:49.873068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.186 [2024-11-20 11:25:49.873131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.186 [2024-11-20 11:25:49.873164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:42.186 [2024-11-20 11:25:49.873180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.186 [2024-11-20 11:25:49.875950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.186 [2024-11-20 11:25:49.875992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.186 
pt2 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.186 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.187 malloc3 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.187 [2024-11-20 11:25:49.948834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.187 [2024-11-20 11:25:49.948905] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.187 [2024-11-20 11:25:49.948944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:42.187 [2024-11-20 11:25:49.948963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.187 [2024-11-20 11:25:49.952267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.187 [2024-11-20 11:25:49.952318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.187 pt3 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.187 [2024-11-20 11:25:49.961079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.187 [2024-11-20 11:25:49.963937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.187 [2024-11-20 11:25:49.964071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.187 [2024-11-20 11:25:49.964332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:42.187 [2024-11-20 11:25:49.964378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.187 [2024-11-20 11:25:49.964779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:42.187 
[2024-11-20 11:25:49.965065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:42.187 [2024-11-20 11:25:49.965101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:42.187 [2024-11-20 11:25:49.965389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.187 11:25:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.187 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.187 "name": "raid_bdev1", 00:13:42.187 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:42.187 "strip_size_kb": 0, 00:13:42.187 "state": "online", 00:13:42.187 "raid_level": "raid1", 00:13:42.187 "superblock": true, 00:13:42.187 "num_base_bdevs": 3, 00:13:42.187 "num_base_bdevs_discovered": 3, 00:13:42.187 "num_base_bdevs_operational": 3, 00:13:42.187 "base_bdevs_list": [ 00:13:42.187 { 00:13:42.187 "name": "pt1", 00:13:42.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.187 "is_configured": true, 00:13:42.187 "data_offset": 2048, 00:13:42.187 "data_size": 63488 00:13:42.187 }, 00:13:42.187 { 00:13:42.187 "name": "pt2", 00:13:42.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.187 "is_configured": true, 00:13:42.187 "data_offset": 2048, 00:13:42.187 "data_size": 63488 00:13:42.187 }, 00:13:42.187 { 00:13:42.187 "name": "pt3", 00:13:42.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.187 "is_configured": true, 00:13:42.187 "data_offset": 2048, 00:13:42.187 "data_size": 63488 00:13:42.187 } 00:13:42.187 ] 00:13:42.187 }' 00:13:42.187 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.187 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.755 11:25:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.755 [2024-11-20 11:25:50.477875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.755 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.756 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.756 "name": "raid_bdev1", 00:13:42.756 "aliases": [ 00:13:42.756 "91a505bc-4a52-481e-a5ff-41999f797a93" 00:13:42.756 ], 00:13:42.756 "product_name": "Raid Volume", 00:13:42.756 "block_size": 512, 00:13:42.756 "num_blocks": 63488, 00:13:42.756 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:42.756 "assigned_rate_limits": { 00:13:42.756 "rw_ios_per_sec": 0, 00:13:42.756 "rw_mbytes_per_sec": 0, 00:13:42.756 "r_mbytes_per_sec": 0, 00:13:42.756 "w_mbytes_per_sec": 0 00:13:42.756 }, 00:13:42.756 "claimed": false, 00:13:42.756 "zoned": false, 00:13:42.756 "supported_io_types": { 00:13:42.756 "read": true, 00:13:42.756 "write": true, 00:13:42.756 "unmap": false, 00:13:42.756 "flush": false, 00:13:42.756 "reset": true, 00:13:42.756 "nvme_admin": false, 00:13:42.756 "nvme_io": false, 00:13:42.756 "nvme_io_md": false, 00:13:42.756 "write_zeroes": true, 00:13:42.756 "zcopy": false, 00:13:42.756 "get_zone_info": false, 00:13:42.756 "zone_management": false, 00:13:42.756 "zone_append": false, 00:13:42.756 "compare": false, 00:13:42.756 
"compare_and_write": false, 00:13:42.756 "abort": false, 00:13:42.756 "seek_hole": false, 00:13:42.756 "seek_data": false, 00:13:42.756 "copy": false, 00:13:42.756 "nvme_iov_md": false 00:13:42.756 }, 00:13:42.756 "memory_domains": [ 00:13:42.756 { 00:13:42.756 "dma_device_id": "system", 00:13:42.756 "dma_device_type": 1 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.756 "dma_device_type": 2 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "dma_device_id": "system", 00:13:42.756 "dma_device_type": 1 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.756 "dma_device_type": 2 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "dma_device_id": "system", 00:13:42.756 "dma_device_type": 1 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.756 "dma_device_type": 2 00:13:42.756 } 00:13:42.756 ], 00:13:42.756 "driver_specific": { 00:13:42.756 "raid": { 00:13:42.756 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:42.756 "strip_size_kb": 0, 00:13:42.756 "state": "online", 00:13:42.756 "raid_level": "raid1", 00:13:42.756 "superblock": true, 00:13:42.756 "num_base_bdevs": 3, 00:13:42.756 "num_base_bdevs_discovered": 3, 00:13:42.756 "num_base_bdevs_operational": 3, 00:13:42.756 "base_bdevs_list": [ 00:13:42.756 { 00:13:42.756 "name": "pt1", 00:13:42.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.756 "is_configured": true, 00:13:42.756 "data_offset": 2048, 00:13:42.756 "data_size": 63488 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "name": "pt2", 00:13:42.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.756 "is_configured": true, 00:13:42.756 "data_offset": 2048, 00:13:42.756 "data_size": 63488 00:13:42.756 }, 00:13:42.756 { 00:13:42.756 "name": "pt3", 00:13:42.756 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.756 "is_configured": true, 00:13:42.756 "data_offset": 2048, 00:13:42.756 "data_size": 63488 00:13:42.756 } 
00:13:42.756 ] 00:13:42.756 } 00:13:42.756 } 00:13:42.756 }' 00:13:42.756 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.756 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:42.756 pt2 00:13:42.756 pt3' 00:13:42.756 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.014 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.014 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.014 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 [2024-11-20 11:25:50.785865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=91a505bc-4a52-481e-a5ff-41999f797a93 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 91a505bc-4a52-481e-a5ff-41999f797a93 ']' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 [2024-11-20 11:25:50.829517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.015 [2024-11-20 11:25:50.829556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.015 [2024-11-20 11:25:50.829682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.015 [2024-11-20 11:25:50.829784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.015 [2024-11-20 11:25:50.829801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:43.015 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.273 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:43.274 11:25:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 [2024-11-20 11:25:50.977634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:43.274 [2024-11-20 11:25:50.980095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:43.274 [2024-11-20 11:25:50.980172] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:43.274 [2024-11-20 11:25:50.980246] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:43.274 [2024-11-20 11:25:50.980319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:43.274 [2024-11-20 11:25:50.980354] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:43.274 [2024-11-20 11:25:50.980383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.274 [2024-11-20 11:25:50.980398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:43.274 request: 00:13:43.274 { 00:13:43.274 "name": "raid_bdev1", 00:13:43.274 "raid_level": "raid1", 00:13:43.274 "base_bdevs": [ 00:13:43.274 "malloc1", 00:13:43.274 "malloc2", 00:13:43.274 "malloc3" 00:13:43.274 ], 00:13:43.274 "superblock": false, 00:13:43.274 "method": "bdev_raid_create", 00:13:43.274 "req_id": 1 00:13:43.274 } 00:13:43.274 Got JSON-RPC error response 00:13:43.274 response: 00:13:43.274 { 00:13:43.274 "code": -17, 00:13:43.274 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:43.274 } 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 [2024-11-20 11:25:51.041560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.274 [2024-11-20 11:25:51.041648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.274 [2024-11-20 11:25:51.041687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:43.274 [2024-11-20 11:25:51.041703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.274 [2024-11-20 11:25:51.044511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.274 [2024-11-20 11:25:51.044553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.274 [2024-11-20 11:25:51.044672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.274 [2024-11-20 11:25:51.044740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.274 pt1 00:13:43.274 
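The `verify_raid_bdev_state` helper invoked throughout this log filters `rpc_cmd bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares the state fields against the expected values. A minimal Python stand-in for that check, using the JSON shape shown in this log (illustrative sketch only, not part of the SPDK test scripts; field values copied from the log output):

```python
import json

# Sample bdev_raid_get_bdevs output, shaped like the raid_bdev_info JSON
# captured in this log after pt1 is claimed.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93",
    "strip_size_kb": 0,
    "state": "configuring",
    "raid_level": "raid1",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, num_operational):
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# With only pt1 attached, the raid bdev is still "configuring": one of
# three base bdevs discovered, three operational once complete.
info = verify_raid_bdev_state(rpc_output, "raid_bdev1", "configuring", "raid1", 3)
```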
11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.274 "name": "raid_bdev1", 00:13:43.274 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:43.274 "strip_size_kb": 0, 00:13:43.274 
"state": "configuring", 00:13:43.274 "raid_level": "raid1", 00:13:43.274 "superblock": true, 00:13:43.274 "num_base_bdevs": 3, 00:13:43.274 "num_base_bdevs_discovered": 1, 00:13:43.274 "num_base_bdevs_operational": 3, 00:13:43.274 "base_bdevs_list": [ 00:13:43.274 { 00:13:43.274 "name": "pt1", 00:13:43.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.274 "is_configured": true, 00:13:43.274 "data_offset": 2048, 00:13:43.274 "data_size": 63488 00:13:43.274 }, 00:13:43.274 { 00:13:43.274 "name": null, 00:13:43.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.274 "is_configured": false, 00:13:43.274 "data_offset": 2048, 00:13:43.274 "data_size": 63488 00:13:43.274 }, 00:13:43.274 { 00:13:43.274 "name": null, 00:13:43.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.274 "is_configured": false, 00:13:43.274 "data_offset": 2048, 00:13:43.274 "data_size": 63488 00:13:43.274 } 00:13:43.274 ] 00:13:43.274 }' 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.274 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.841 [2024-11-20 11:25:51.505753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:43.841 [2024-11-20 11:25:51.505825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.841 [2024-11-20 11:25:51.505858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:43.841 
[2024-11-20 11:25:51.505874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.841 [2024-11-20 11:25:51.506451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.841 [2024-11-20 11:25:51.506498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:43.841 [2024-11-20 11:25:51.506610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:43.841 [2024-11-20 11:25:51.506664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.841 pt2 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.841 [2024-11-20 11:25:51.513742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.841 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.842 "name": "raid_bdev1", 00:13:43.842 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:43.842 "strip_size_kb": 0, 00:13:43.842 "state": "configuring", 00:13:43.842 "raid_level": "raid1", 00:13:43.842 "superblock": true, 00:13:43.842 "num_base_bdevs": 3, 00:13:43.842 "num_base_bdevs_discovered": 1, 00:13:43.842 "num_base_bdevs_operational": 3, 00:13:43.842 "base_bdevs_list": [ 00:13:43.842 { 00:13:43.842 "name": "pt1", 00:13:43.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.842 "is_configured": true, 00:13:43.842 "data_offset": 2048, 00:13:43.842 "data_size": 63488 00:13:43.842 }, 00:13:43.842 { 00:13:43.842 "name": null, 00:13:43.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.842 "is_configured": false, 00:13:43.842 "data_offset": 0, 00:13:43.842 "data_size": 63488 00:13:43.842 }, 00:13:43.842 { 00:13:43.842 "name": null, 00:13:43.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.842 "is_configured": false, 00:13:43.842 
"data_offset": 2048, 00:13:43.842 "data_size": 63488 00:13:43.842 } 00:13:43.842 ] 00:13:43.842 }' 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.842 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.408 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:44.408 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.408 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.408 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.408 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.408 [2024-11-20 11:25:52.025859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.408 [2024-11-20 11:25:52.025942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.408 [2024-11-20 11:25:52.025975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:44.408 [2024-11-20 11:25:52.025993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.409 [2024-11-20 11:25:52.026567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.409 [2024-11-20 11:25:52.026606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.409 [2024-11-20 11:25:52.026751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:44.409 [2024-11-20 11:25:52.026809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:44.409 pt2 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.409 11:25:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.409 [2024-11-20 11:25:52.033834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:44.409 [2024-11-20 11:25:52.033889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.409 [2024-11-20 11:25:52.033918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:44.409 [2024-11-20 11:25:52.033938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.409 [2024-11-20 11:25:52.034389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.409 [2024-11-20 11:25:52.034437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:44.409 [2024-11-20 11:25:52.034515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:44.409 [2024-11-20 11:25:52.034548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:44.409 [2024-11-20 11:25:52.034721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:44.409 [2024-11-20 11:25:52.034746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.409 [2024-11-20 11:25:52.035055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:44.409 [2024-11-20 11:25:52.035274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:44.409 [2024-11-20 11:25:52.035301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:44.409 [2024-11-20 11:25:52.035474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.409 pt3 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.409 11:25:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.409 "name": "raid_bdev1", 00:13:44.409 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:44.409 "strip_size_kb": 0, 00:13:44.409 "state": "online", 00:13:44.409 "raid_level": "raid1", 00:13:44.409 "superblock": true, 00:13:44.409 "num_base_bdevs": 3, 00:13:44.409 "num_base_bdevs_discovered": 3, 00:13:44.409 "num_base_bdevs_operational": 3, 00:13:44.409 "base_bdevs_list": [ 00:13:44.409 { 00:13:44.409 "name": "pt1", 00:13:44.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.409 "is_configured": true, 00:13:44.409 "data_offset": 2048, 00:13:44.409 "data_size": 63488 00:13:44.409 }, 00:13:44.409 { 00:13:44.409 "name": "pt2", 00:13:44.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.409 "is_configured": true, 00:13:44.409 "data_offset": 2048, 00:13:44.409 "data_size": 63488 00:13:44.409 }, 00:13:44.409 { 00:13:44.409 "name": "pt3", 00:13:44.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.409 "is_configured": true, 00:13:44.409 "data_offset": 2048, 00:13:44.409 "data_size": 63488 00:13:44.409 } 00:13:44.409 ] 00:13:44.409 }' 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.409 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:44.976 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.977 [2024-11-20 11:25:52.562445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:44.977 "name": "raid_bdev1", 00:13:44.977 "aliases": [ 00:13:44.977 "91a505bc-4a52-481e-a5ff-41999f797a93" 00:13:44.977 ], 00:13:44.977 "product_name": "Raid Volume", 00:13:44.977 "block_size": 512, 00:13:44.977 "num_blocks": 63488, 00:13:44.977 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:44.977 "assigned_rate_limits": { 00:13:44.977 "rw_ios_per_sec": 0, 00:13:44.977 "rw_mbytes_per_sec": 0, 00:13:44.977 "r_mbytes_per_sec": 0, 00:13:44.977 "w_mbytes_per_sec": 0 00:13:44.977 }, 00:13:44.977 "claimed": false, 00:13:44.977 "zoned": false, 00:13:44.977 "supported_io_types": { 00:13:44.977 "read": true, 00:13:44.977 "write": true, 00:13:44.977 "unmap": false, 00:13:44.977 "flush": false, 00:13:44.977 "reset": true, 00:13:44.977 "nvme_admin": false, 00:13:44.977 "nvme_io": false, 00:13:44.977 "nvme_io_md": false, 00:13:44.977 "write_zeroes": true, 00:13:44.977 "zcopy": false, 00:13:44.977 "get_zone_info": 
false, 00:13:44.977 "zone_management": false, 00:13:44.977 "zone_append": false, 00:13:44.977 "compare": false, 00:13:44.977 "compare_and_write": false, 00:13:44.977 "abort": false, 00:13:44.977 "seek_hole": false, 00:13:44.977 "seek_data": false, 00:13:44.977 "copy": false, 00:13:44.977 "nvme_iov_md": false 00:13:44.977 }, 00:13:44.977 "memory_domains": [ 00:13:44.977 { 00:13:44.977 "dma_device_id": "system", 00:13:44.977 "dma_device_type": 1 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.977 "dma_device_type": 2 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "dma_device_id": "system", 00:13:44.977 "dma_device_type": 1 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.977 "dma_device_type": 2 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "dma_device_id": "system", 00:13:44.977 "dma_device_type": 1 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.977 "dma_device_type": 2 00:13:44.977 } 00:13:44.977 ], 00:13:44.977 "driver_specific": { 00:13:44.977 "raid": { 00:13:44.977 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:44.977 "strip_size_kb": 0, 00:13:44.977 "state": "online", 00:13:44.977 "raid_level": "raid1", 00:13:44.977 "superblock": true, 00:13:44.977 "num_base_bdevs": 3, 00:13:44.977 "num_base_bdevs_discovered": 3, 00:13:44.977 "num_base_bdevs_operational": 3, 00:13:44.977 "base_bdevs_list": [ 00:13:44.977 { 00:13:44.977 "name": "pt1", 00:13:44.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.977 "is_configured": true, 00:13:44.977 "data_offset": 2048, 00:13:44.977 "data_size": 63488 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "name": "pt2", 00:13:44.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.977 "is_configured": true, 00:13:44.977 "data_offset": 2048, 00:13:44.977 "data_size": 63488 00:13:44.977 }, 00:13:44.977 { 00:13:44.977 "name": "pt3", 00:13:44.977 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:44.977 "is_configured": true, 00:13:44.977 "data_offset": 2048, 00:13:44.977 "data_size": 63488 00:13:44.977 } 00:13:44.977 ] 00:13:44.977 } 00:13:44.977 } 00:13:44.977 }' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:44.977 pt2 00:13:44.977 pt3' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:44.977 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.236 [2024-11-20 11:25:52.886470] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 91a505bc-4a52-481e-a5ff-41999f797a93 '!=' 91a505bc-4a52-481e-a5ff-41999f797a93 ']' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.236 [2024-11-20 11:25:52.938188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.236 11:25:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.236 "name": "raid_bdev1", 00:13:45.236 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:45.236 "strip_size_kb": 0, 00:13:45.236 "state": "online", 00:13:45.236 "raid_level": "raid1", 00:13:45.236 "superblock": true, 00:13:45.236 "num_base_bdevs": 3, 00:13:45.236 "num_base_bdevs_discovered": 2, 00:13:45.236 "num_base_bdevs_operational": 2, 00:13:45.236 "base_bdevs_list": [ 00:13:45.236 { 00:13:45.236 "name": null, 00:13:45.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.236 "is_configured": false, 00:13:45.236 "data_offset": 0, 00:13:45.236 "data_size": 63488 00:13:45.236 }, 00:13:45.236 { 00:13:45.236 "name": "pt2", 00:13:45.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.236 "is_configured": true, 00:13:45.236 "data_offset": 2048, 00:13:45.236 "data_size": 63488 00:13:45.236 }, 00:13:45.236 { 00:13:45.236 "name": "pt3", 00:13:45.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.236 "is_configured": true, 00:13:45.236 "data_offset": 2048, 00:13:45.236 "data_size": 63488 00:13:45.236 } 
00:13:45.236 ] 00:13:45.236 }' 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.236 11:25:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 [2024-11-20 11:25:53.462268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.805 [2024-11-20 11:25:53.462308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.805 [2024-11-20 11:25:53.462416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.805 [2024-11-20 11:25:53.462497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.805 [2024-11-20 11:25:53.462520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 [2024-11-20 11:25:53.542244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:45.805 [2024-11-20 11:25:53.542311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.805 [2024-11-20 11:25:53.542337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:45.805 [2024-11-20 11:25:53.542355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.805 [2024-11-20 11:25:53.545256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.805 [2024-11-20 11:25:53.545302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:45.805 [2024-11-20 11:25:53.545399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:45.805 [2024-11-20 11:25:53.545470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.805 pt2 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.805 11:25:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.805 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.805 "name": "raid_bdev1", 00:13:45.805 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:45.805 "strip_size_kb": 0, 00:13:45.805 "state": "configuring", 00:13:45.805 "raid_level": "raid1", 00:13:45.805 "superblock": true, 00:13:45.805 "num_base_bdevs": 3, 00:13:45.805 "num_base_bdevs_discovered": 1, 00:13:45.805 "num_base_bdevs_operational": 2, 00:13:45.805 "base_bdevs_list": [ 00:13:45.805 { 00:13:45.805 "name": null, 00:13:45.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.805 "is_configured": false, 00:13:45.805 "data_offset": 2048, 00:13:45.805 "data_size": 63488 00:13:45.805 }, 00:13:45.805 { 00:13:45.805 "name": "pt2", 00:13:45.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:45.806 "is_configured": true, 00:13:45.806 "data_offset": 2048, 00:13:45.806 "data_size": 63488 00:13:45.806 }, 00:13:45.806 { 00:13:45.806 "name": null, 00:13:45.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:45.806 "is_configured": false, 00:13:45.806 "data_offset": 2048, 00:13:45.806 "data_size": 63488 00:13:45.806 } 
00:13:45.806 ] 00:13:45.806 }' 00:13:45.806 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.806 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.373 [2024-11-20 11:25:54.046422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:46.373 [2024-11-20 11:25:54.046500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.373 [2024-11-20 11:25:54.046532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:46.373 [2024-11-20 11:25:54.046551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.373 [2024-11-20 11:25:54.047147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.373 [2024-11-20 11:25:54.047187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:46.373 [2024-11-20 11:25:54.047313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:46.373 [2024-11-20 11:25:54.047355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.373 [2024-11-20 11:25:54.047503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:46.373 [2024-11-20 11:25:54.047525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.373 [2024-11-20 11:25:54.047879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:46.373 [2024-11-20 11:25:54.048092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:46.373 [2024-11-20 11:25:54.048115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:46.373 [2024-11-20 11:25:54.048304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.373 pt3 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.373 
11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.373 "name": "raid_bdev1", 00:13:46.373 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:46.373 "strip_size_kb": 0, 00:13:46.373 "state": "online", 00:13:46.373 "raid_level": "raid1", 00:13:46.373 "superblock": true, 00:13:46.373 "num_base_bdevs": 3, 00:13:46.373 "num_base_bdevs_discovered": 2, 00:13:46.373 "num_base_bdevs_operational": 2, 00:13:46.373 "base_bdevs_list": [ 00:13:46.373 { 00:13:46.373 "name": null, 00:13:46.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.373 "is_configured": false, 00:13:46.373 "data_offset": 2048, 00:13:46.373 "data_size": 63488 00:13:46.373 }, 00:13:46.373 { 00:13:46.373 "name": "pt2", 00:13:46.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.373 "is_configured": true, 00:13:46.373 "data_offset": 2048, 00:13:46.373 "data_size": 63488 00:13:46.373 }, 00:13:46.373 { 00:13:46.373 "name": "pt3", 00:13:46.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.373 "is_configured": true, 00:13:46.373 "data_offset": 2048, 00:13:46.373 "data_size": 63488 00:13:46.373 } 00:13:46.373 ] 00:13:46.373 }' 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.373 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.949 [2024-11-20 11:25:54.614561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.949 [2024-11-20 11:25:54.614607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.949 [2024-11-20 11:25:54.614717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.949 [2024-11-20 11:25:54.614804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.949 [2024-11-20 11:25:54.614829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:46.949 11:25:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.950 [2024-11-20 11:25:54.690651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:46.950 [2024-11-20 11:25:54.690713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.950 [2024-11-20 11:25:54.690745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:46.950 [2024-11-20 11:25:54.690761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.950 [2024-11-20 11:25:54.693841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.950 [2024-11-20 11:25:54.693885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:46.950 [2024-11-20 11:25:54.693999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:46.950 [2024-11-20 11:25:54.694057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:46.950 [2024-11-20 11:25:54.694235] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:46.950 [2024-11-20 11:25:54.694260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.950 [2024-11-20 11:25:54.694286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:46.950 [2024-11-20 11:25:54.694359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.950 pt1 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.950 "name": "raid_bdev1", 00:13:46.950 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:46.950 "strip_size_kb": 0, 00:13:46.950 "state": "configuring", 00:13:46.950 "raid_level": "raid1", 00:13:46.950 "superblock": true, 00:13:46.950 "num_base_bdevs": 3, 00:13:46.950 "num_base_bdevs_discovered": 1, 00:13:46.950 "num_base_bdevs_operational": 2, 00:13:46.950 "base_bdevs_list": [ 00:13:46.950 { 00:13:46.950 "name": null, 00:13:46.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.950 "is_configured": false, 00:13:46.950 "data_offset": 2048, 00:13:46.950 "data_size": 63488 00:13:46.950 }, 00:13:46.950 { 00:13:46.950 "name": "pt2", 00:13:46.950 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.950 "is_configured": true, 00:13:46.950 "data_offset": 2048, 00:13:46.950 "data_size": 63488 00:13:46.950 }, 00:13:46.950 { 00:13:46.950 "name": null, 00:13:46.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.950 "is_configured": false, 00:13:46.950 "data_offset": 2048, 00:13:46.950 "data_size": 63488 00:13:46.950 } 00:13:46.950 ] 00:13:46.950 }' 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.950 11:25:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.519 [2024-11-20 11:25:55.290880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:47.519 [2024-11-20 11:25:55.290955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.519 [2024-11-20 11:25:55.290989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:47.519 [2024-11-20 11:25:55.291020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.519 [2024-11-20 11:25:55.291643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.519 [2024-11-20 11:25:55.291691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:47.519 [2024-11-20 11:25:55.291802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:47.519 [2024-11-20 11:25:55.291871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:47.519 [2024-11-20 11:25:55.292048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:47.519 [2024-11-20 11:25:55.292065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.519 [2024-11-20 11:25:55.292384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:47.519 [2024-11-20 11:25:55.292637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:47.519 [2024-11-20 11:25:55.292683] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:47.519 [2024-11-20 11:25:55.292857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.519 pt3 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.519 "name": "raid_bdev1", 00:13:47.519 "uuid": "91a505bc-4a52-481e-a5ff-41999f797a93", 00:13:47.519 "strip_size_kb": 0, 00:13:47.519 "state": "online", 00:13:47.519 "raid_level": "raid1", 00:13:47.519 "superblock": true, 00:13:47.519 "num_base_bdevs": 3, 00:13:47.519 "num_base_bdevs_discovered": 2, 00:13:47.519 "num_base_bdevs_operational": 2, 00:13:47.519 "base_bdevs_list": [ 00:13:47.519 { 00:13:47.519 "name": null, 00:13:47.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.519 "is_configured": false, 00:13:47.519 "data_offset": 2048, 00:13:47.519 "data_size": 63488 00:13:47.519 }, 00:13:47.519 { 00:13:47.519 "name": "pt2", 00:13:47.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.519 "is_configured": true, 00:13:47.519 "data_offset": 2048, 00:13:47.519 "data_size": 63488 00:13:47.519 }, 00:13:47.519 { 00:13:47.519 "name": "pt3", 00:13:47.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.519 "is_configured": true, 00:13:47.519 "data_offset": 2048, 00:13:47.519 "data_size": 63488 00:13:47.519 } 00:13:47.519 ] 00:13:47.519 }' 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.519 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:48.087 [2024-11-20 11:25:55.883406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 91a505bc-4a52-481e-a5ff-41999f797a93 '!=' 91a505bc-4a52-481e-a5ff-41999f797a93 ']' 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68637 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68637 ']' 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68637 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:48.087 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68637 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.345 killing process with pid 68637 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68637' 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68637 00:13:48.345 [2024-11-20 11:25:55.952144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.345 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68637 00:13:48.345 [2024-11-20 11:25:55.952283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.345 [2024-11-20 11:25:55.952376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.345 [2024-11-20 11:25:55.952397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:48.603 [2024-11-20 11:25:56.224673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.539 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:49.539 00:13:49.539 real 0m8.615s 00:13:49.539 user 0m14.085s 00:13:49.539 sys 0m1.209s 00:13:49.539 11:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.539 ************************************ 00:13:49.539 END TEST raid_superblock_test 00:13:49.539 ************************************ 00:13:49.539 11:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.539 11:25:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:49.539 11:25:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:49.539 11:25:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.539 11:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.539 ************************************ 00:13:49.539 START TEST raid_read_error_test 00:13:49.539 ************************************ 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:49.539 11:25:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:49.539 11:25:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gd1V3tVFja 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69095 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69095 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69095 ']' 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.539 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.815 [2024-11-20 11:25:57.442005] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:49.815 [2024-11-20 11:25:57.442190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69095 ] 00:13:49.815 [2024-11-20 11:25:57.630721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.076 [2024-11-20 11:25:57.788710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.333 [2024-11-20 11:25:58.010903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.333 [2024-11-20 11:25:58.010983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 BaseBdev1_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 true 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-20 11:25:58.527263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:50.900 [2024-11-20 11:25:58.527328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.900 [2024-11-20 11:25:58.527357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:50.900 [2024-11-20 11:25:58.527376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.900 [2024-11-20 11:25:58.530262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.900 [2024-11-20 11:25:58.530309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.900 BaseBdev1 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 BaseBdev2_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 true 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-20 11:25:58.584256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:50.900 [2024-11-20 11:25:58.584321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.900 [2024-11-20 11:25:58.584352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:50.900 [2024-11-20 11:25:58.584370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.900 [2024-11-20 11:25:58.587245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.900 [2024-11-20 11:25:58.587290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.900 BaseBdev2 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 BaseBdev3_malloc 00:13:50.900 11:25:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 true 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-20 11:25:58.664030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:50.900 [2024-11-20 11:25:58.664101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.900 [2024-11-20 11:25:58.664132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:50.900 [2024-11-20 11:25:58.664154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.900 [2024-11-20 11:25:58.667563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.900 [2024-11-20 11:25:58.667632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:50.900 BaseBdev3 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.900 [2024-11-20 11:25:58.676638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.900 [2024-11-20 11:25:58.679983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.900 [2024-11-20 11:25:58.680110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.900 [2024-11-20 11:25:58.680400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:50.900 [2024-11-20 11:25:58.680431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.900 [2024-11-20 11:25:58.680772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:50.900 [2024-11-20 11:25:58.681018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:50.900 [2024-11-20 11:25:58.681050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:50.900 [2024-11-20 11:25:58.681285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.900 11:25:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.900 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.901 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.901 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.901 "name": "raid_bdev1", 00:13:50.901 "uuid": "d8aa699e-3a93-483d-bc9b-9ded5282239c", 00:13:50.901 "strip_size_kb": 0, 00:13:50.901 "state": "online", 00:13:50.901 "raid_level": "raid1", 00:13:50.901 "superblock": true, 00:13:50.901 "num_base_bdevs": 3, 00:13:50.901 "num_base_bdevs_discovered": 3, 00:13:50.901 "num_base_bdevs_operational": 3, 00:13:50.901 "base_bdevs_list": [ 00:13:50.901 { 00:13:50.901 "name": "BaseBdev1", 00:13:50.901 "uuid": "e8670feb-e254-5b4f-9204-7f8de9f77e60", 00:13:50.901 "is_configured": true, 00:13:50.901 "data_offset": 2048, 00:13:50.901 "data_size": 63488 00:13:50.901 }, 00:13:50.901 { 00:13:50.901 "name": "BaseBdev2", 00:13:50.901 "uuid": "7a124848-af67-51a6-b18c-b85fbc885c5d", 00:13:50.901 "is_configured": true, 00:13:50.901 "data_offset": 2048, 00:13:50.901 "data_size": 63488 
00:13:50.901 }, 00:13:50.901 { 00:13:50.901 "name": "BaseBdev3", 00:13:50.901 "uuid": "e83541d8-dd8f-58ef-aebc-987b120bfb12", 00:13:50.901 "is_configured": true, 00:13:50.901 "data_offset": 2048, 00:13:50.901 "data_size": 63488 00:13:50.901 } 00:13:50.901 ] 00:13:50.901 }' 00:13:50.901 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.901 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.470 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:51.470 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.470 [2024-11-20 11:25:59.302128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.403 
11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.403 "name": "raid_bdev1", 00:13:52.403 "uuid": "d8aa699e-3a93-483d-bc9b-9ded5282239c", 00:13:52.403 "strip_size_kb": 0, 00:13:52.403 "state": "online", 00:13:52.403 "raid_level": "raid1", 00:13:52.403 "superblock": true, 00:13:52.403 "num_base_bdevs": 3, 00:13:52.403 "num_base_bdevs_discovered": 3, 00:13:52.403 "num_base_bdevs_operational": 3, 00:13:52.403 "base_bdevs_list": [ 00:13:52.403 { 00:13:52.403 "name": "BaseBdev1", 00:13:52.403 "uuid": "e8670feb-e254-5b4f-9204-7f8de9f77e60", 
00:13:52.403 "is_configured": true, 00:13:52.403 "data_offset": 2048, 00:13:52.403 "data_size": 63488 00:13:52.403 }, 00:13:52.403 { 00:13:52.403 "name": "BaseBdev2", 00:13:52.403 "uuid": "7a124848-af67-51a6-b18c-b85fbc885c5d", 00:13:52.403 "is_configured": true, 00:13:52.403 "data_offset": 2048, 00:13:52.403 "data_size": 63488 00:13:52.403 }, 00:13:52.403 { 00:13:52.403 "name": "BaseBdev3", 00:13:52.403 "uuid": "e83541d8-dd8f-58ef-aebc-987b120bfb12", 00:13:52.403 "is_configured": true, 00:13:52.403 "data_offset": 2048, 00:13:52.403 "data_size": 63488 00:13:52.403 } 00:13:52.403 ] 00:13:52.403 }' 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.403 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.971 [2024-11-20 11:26:00.719301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.971 [2024-11-20 11:26:00.719340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.971 [2024-11-20 11:26:00.722750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.971 [2024-11-20 11:26:00.722851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.971 [2024-11-20 11:26:00.723012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.971 [2024-11-20 11:26:00.723030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:52.971 { 00:13:52.971 "results": [ 00:13:52.971 { 00:13:52.971 "job": "raid_bdev1", 
00:13:52.971 "core_mask": "0x1", 00:13:52.971 "workload": "randrw", 00:13:52.971 "percentage": 50, 00:13:52.971 "status": "finished", 00:13:52.971 "queue_depth": 1, 00:13:52.971 "io_size": 131072, 00:13:52.971 "runtime": 1.414744, 00:13:52.971 "iops": 9393.21884383323, 00:13:52.971 "mibps": 1174.1523554791538, 00:13:52.971 "io_failed": 0, 00:13:52.971 "io_timeout": 0, 00:13:52.971 "avg_latency_us": 102.21266584119469, 00:13:52.971 "min_latency_us": 43.52, 00:13:52.971 "max_latency_us": 1936.290909090909 00:13:52.971 } 00:13:52.971 ], 00:13:52.971 "core_count": 1 00:13:52.971 } 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69095 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69095 ']' 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69095 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69095 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.971 killing process with pid 69095 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69095' 00:13:52.971 11:26:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69095 00:13:52.971 [2024-11-20 11:26:00.757005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.971 11:26:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69095 00:13:53.231 [2024-11-20 11:26:00.966176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gd1V3tVFja 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:54.607 00:13:54.607 real 0m4.755s 00:13:54.607 user 0m5.907s 00:13:54.607 sys 0m0.594s 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.607 11:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.607 ************************************ 00:13:54.607 END TEST raid_read_error_test 00:13:54.607 ************************************ 00:13:54.607 11:26:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:54.607 11:26:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:54.607 11:26:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.607 11:26:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.607 ************************************ 00:13:54.607 START TEST raid_write_error_test 00:13:54.607 ************************************ 00:13:54.607 11:26:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0KEXWj4txn 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69235 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69235 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69235 ']' 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.607 11:26:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.607 [2024-11-20 11:26:02.249319] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:13:54.607 [2024-11-20 11:26:02.249497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69235 ] 00:13:54.607 [2024-11-20 11:26:02.433669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.866 [2024-11-20 11:26:02.565501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.125 [2024-11-20 11:26:02.782511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.126 [2024-11-20 11:26:02.782599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.693 BaseBdev1_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.693 true 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.693 [2024-11-20 11:26:03.379595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:55.693 [2024-11-20 11:26:03.379691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.693 [2024-11-20 11:26:03.379722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:55.693 [2024-11-20 11:26:03.379742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.693 [2024-11-20 11:26:03.382647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.693 [2024-11-20 11:26:03.382710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.693 BaseBdev1 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.693 BaseBdev2_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.693 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 true 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 [2024-11-20 11:26:03.444357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:55.694 [2024-11-20 11:26:03.444434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.694 [2024-11-20 11:26:03.444461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:55.694 [2024-11-20 11:26:03.444479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.694 [2024-11-20 11:26:03.447278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.694 [2024-11-20 11:26:03.447329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.694 BaseBdev2 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.694 11:26:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 BaseBdev3_malloc 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 true 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 [2024-11-20 11:26:03.516435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:55.694 [2024-11-20 11:26:03.516504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.694 [2024-11-20 11:26:03.516532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:55.694 [2024-11-20 11:26:03.516551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.694 [2024-11-20 11:26:03.519327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.694 [2024-11-20 11:26:03.519378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:55.694 BaseBdev3 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 [2024-11-20 11:26:03.524528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.694 [2024-11-20 11:26:03.527010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.694 [2024-11-20 11:26:03.527122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.694 [2024-11-20 11:26:03.527390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:55.694 [2024-11-20 11:26:03.527419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.694 [2024-11-20 11:26:03.527746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:55.694 [2024-11-20 11:26:03.527995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:55.694 [2024-11-20 11:26:03.528026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:55.694 [2024-11-20 11:26:03.528219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.953 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.953 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.953 "name": "raid_bdev1", 00:13:55.953 "uuid": "957a590a-69d7-48d7-93cf-51f3696e160d", 00:13:55.953 "strip_size_kb": 0, 00:13:55.953 "state": "online", 00:13:55.953 "raid_level": "raid1", 00:13:55.953 "superblock": true, 00:13:55.953 "num_base_bdevs": 3, 00:13:55.953 "num_base_bdevs_discovered": 3, 00:13:55.953 "num_base_bdevs_operational": 3, 00:13:55.953 "base_bdevs_list": [ 00:13:55.953 { 00:13:55.953 "name": "BaseBdev1", 00:13:55.953 
"uuid": "02d4d4a8-dd6c-506e-bdd8-2d7cba5becfe", 00:13:55.953 "is_configured": true, 00:13:55.953 "data_offset": 2048, 00:13:55.953 "data_size": 63488 00:13:55.953 }, 00:13:55.953 { 00:13:55.953 "name": "BaseBdev2", 00:13:55.953 "uuid": "38d1c642-64b0-5e6f-85e3-ea52cf4cb578", 00:13:55.953 "is_configured": true, 00:13:55.953 "data_offset": 2048, 00:13:55.953 "data_size": 63488 00:13:55.953 }, 00:13:55.953 { 00:13:55.953 "name": "BaseBdev3", 00:13:55.953 "uuid": "dc3ee654-28b1-5dd5-b876-92b8d06f24e1", 00:13:55.953 "is_configured": true, 00:13:55.953 "data_offset": 2048, 00:13:55.953 "data_size": 63488 00:13:55.953 } 00:13:55.953 ] 00:13:55.953 }' 00:13:55.953 11:26:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.953 11:26:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.212 11:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:56.212 11:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.471 [2024-11-20 11:26:04.166168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:57.406 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:57.406 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.406 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.406 [2024-11-20 11:26:05.051424] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:57.406 [2024-11-20 11:26:05.051486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.406 [2024-11-20 11:26:05.051744] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:57.406 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.407 "name": "raid_bdev1", 00:13:57.407 "uuid": "957a590a-69d7-48d7-93cf-51f3696e160d", 00:13:57.407 "strip_size_kb": 0, 00:13:57.407 "state": "online", 00:13:57.407 "raid_level": "raid1", 00:13:57.407 "superblock": true, 00:13:57.407 "num_base_bdevs": 3, 00:13:57.407 "num_base_bdevs_discovered": 2, 00:13:57.407 "num_base_bdevs_operational": 2, 00:13:57.407 "base_bdevs_list": [ 00:13:57.407 { 00:13:57.407 "name": null, 00:13:57.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.407 "is_configured": false, 00:13:57.407 "data_offset": 0, 00:13:57.407 "data_size": 63488 00:13:57.407 }, 00:13:57.407 { 00:13:57.407 "name": "BaseBdev2", 00:13:57.407 "uuid": "38d1c642-64b0-5e6f-85e3-ea52cf4cb578", 00:13:57.407 "is_configured": true, 00:13:57.407 "data_offset": 2048, 00:13:57.407 "data_size": 63488 00:13:57.407 }, 00:13:57.407 { 00:13:57.407 "name": "BaseBdev3", 00:13:57.407 "uuid": "dc3ee654-28b1-5dd5-b876-92b8d06f24e1", 00:13:57.407 "is_configured": true, 00:13:57.407 "data_offset": 2048, 00:13:57.407 "data_size": 63488 00:13:57.407 } 00:13:57.407 ] 00:13:57.407 }' 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.407 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.975 [2024-11-20 11:26:05.576275] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.975 [2024-11-20 11:26:05.576320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.975 [2024-11-20 11:26:05.579596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.975 [2024-11-20 11:26:05.579686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.975 [2024-11-20 11:26:05.579801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.975 [2024-11-20 11:26:05.579826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:57.975 { 00:13:57.975 "results": [ 00:13:57.975 { 00:13:57.975 "job": "raid_bdev1", 00:13:57.975 "core_mask": "0x1", 00:13:57.975 "workload": "randrw", 00:13:57.975 "percentage": 50, 00:13:57.975 "status": "finished", 00:13:57.975 "queue_depth": 1, 00:13:57.975 "io_size": 131072, 00:13:57.975 "runtime": 1.407564, 00:13:57.975 "iops": 10481.228562253653, 00:13:57.975 "mibps": 1310.1535702817066, 00:13:57.975 "io_failed": 0, 00:13:57.975 "io_timeout": 0, 00:13:57.975 "avg_latency_us": 91.11592957980811, 00:13:57.975 "min_latency_us": 42.35636363636364, 00:13:57.975 "max_latency_us": 1846.9236363636364 00:13:57.975 } 00:13:57.975 ], 00:13:57.975 "core_count": 1 00:13:57.975 } 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69235 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69235 ']' 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69235 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:57.975 11:26:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69235 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.975 killing process with pid 69235 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69235' 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69235 00:13:57.975 [2024-11-20 11:26:05.614701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.975 11:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69235 00:13:58.234 [2024-11-20 11:26:05.832442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0KEXWj4txn 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:59.170 00:13:59.170 real 0m4.815s 00:13:59.170 user 0m6.030s 00:13:59.170 sys 0m0.580s 00:13:59.170 11:26:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.170 ************************************ 00:13:59.170 END TEST raid_write_error_test 00:13:59.170 ************************************ 00:13:59.170 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.170 11:26:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:59.170 11:26:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:59.170 11:26:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:59.170 11:26:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:59.170 11:26:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.170 11:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.170 ************************************ 00:13:59.170 START TEST raid_state_function_test 00:13:59.170 ************************************ 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:59.170 
11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:59.170 11:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69384 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69384' 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:59.170 Process raid pid: 69384 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69384 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69384 ']' 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.170 11:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.428 [2024-11-20 11:26:07.117433] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:13:59.428 [2024-11-20 11:26:07.117651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.686 [2024-11-20 11:26:07.306906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.687 [2024-11-20 11:26:07.464505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.945 [2024-11-20 11:26:07.690545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.945 [2024-11-20 11:26:07.690601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.510 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.510 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:00.510 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:00.510 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.510 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.510 [2024-11-20 11:26:08.130226] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.511 [2024-11-20 11:26:08.130296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.511 [2024-11-20 11:26:08.130314] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.511 [2024-11-20 11:26:08.130331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.511 [2024-11-20 11:26:08.130341] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:00.511 [2024-11-20 11:26:08.130355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.511 [2024-11-20 11:26:08.130365] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:00.511 [2024-11-20 11:26:08.130379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.511 "name": "Existed_Raid", 00:14:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.511 "strip_size_kb": 64, 00:14:00.511 "state": "configuring", 00:14:00.511 "raid_level": "raid0", 00:14:00.511 "superblock": false, 00:14:00.511 "num_base_bdevs": 4, 00:14:00.511 "num_base_bdevs_discovered": 0, 00:14:00.511 "num_base_bdevs_operational": 4, 00:14:00.511 "base_bdevs_list": [ 00:14:00.511 { 00:14:00.511 "name": "BaseBdev1", 00:14:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.511 "is_configured": false, 00:14:00.511 "data_offset": 0, 00:14:00.511 "data_size": 0 00:14:00.511 }, 00:14:00.511 { 00:14:00.511 "name": "BaseBdev2", 00:14:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.511 "is_configured": false, 00:14:00.511 "data_offset": 0, 00:14:00.511 "data_size": 0 00:14:00.511 }, 00:14:00.511 { 00:14:00.511 "name": "BaseBdev3", 00:14:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.511 "is_configured": false, 00:14:00.511 "data_offset": 0, 00:14:00.511 "data_size": 0 00:14:00.511 }, 00:14:00.511 { 00:14:00.511 "name": "BaseBdev4", 00:14:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.511 "is_configured": false, 00:14:00.511 "data_offset": 0, 00:14:00.511 "data_size": 0 00:14:00.511 } 00:14:00.511 ] 00:14:00.511 }' 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.511 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 [2024-11-20 11:26:08.630327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.078 [2024-11-20 11:26:08.630522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 [2024-11-20 11:26:08.638371] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.078 [2024-11-20 11:26:08.638448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.078 [2024-11-20 11:26:08.638477] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.078 [2024-11-20 11:26:08.638509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.078 [2024-11-20 11:26:08.638529] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.078 [2024-11-20 11:26:08.638558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.078 [2024-11-20 11:26:08.638578] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:01.078 [2024-11-20 11:26:08.638606] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 [2024-11-20 11:26:08.686976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.078 BaseBdev1 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.078 [ 00:14:01.078 { 00:14:01.078 "name": "BaseBdev1", 00:14:01.078 "aliases": [ 00:14:01.078 "3dc982dc-9963-4c69-aab6-10a83524e78b" 00:14:01.078 ], 00:14:01.078 "product_name": "Malloc disk", 00:14:01.078 "block_size": 512, 00:14:01.078 "num_blocks": 65536, 00:14:01.078 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:01.078 "assigned_rate_limits": { 00:14:01.078 "rw_ios_per_sec": 0, 00:14:01.078 "rw_mbytes_per_sec": 0, 00:14:01.078 "r_mbytes_per_sec": 0, 00:14:01.078 "w_mbytes_per_sec": 0 00:14:01.078 }, 00:14:01.078 "claimed": true, 00:14:01.078 "claim_type": "exclusive_write", 00:14:01.078 "zoned": false, 00:14:01.078 "supported_io_types": { 00:14:01.078 "read": true, 00:14:01.078 "write": true, 00:14:01.078 "unmap": true, 00:14:01.078 "flush": true, 00:14:01.078 "reset": true, 00:14:01.078 "nvme_admin": false, 00:14:01.078 "nvme_io": false, 00:14:01.078 "nvme_io_md": false, 00:14:01.078 "write_zeroes": true, 00:14:01.078 "zcopy": true, 00:14:01.078 "get_zone_info": false, 00:14:01.078 "zone_management": false, 00:14:01.078 "zone_append": false, 00:14:01.078 "compare": false, 00:14:01.078 "compare_and_write": false, 00:14:01.078 "abort": true, 00:14:01.078 "seek_hole": false, 00:14:01.078 "seek_data": false, 00:14:01.078 "copy": true, 00:14:01.078 "nvme_iov_md": false 00:14:01.078 }, 00:14:01.078 "memory_domains": [ 00:14:01.078 { 00:14:01.078 "dma_device_id": "system", 00:14:01.078 "dma_device_type": 1 00:14:01.078 }, 00:14:01.078 { 00:14:01.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.078 "dma_device_type": 2 00:14:01.078 } 00:14:01.078 ], 00:14:01.078 "driver_specific": {} 00:14:01.078 } 00:14:01.078 ] 00:14:01.078 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.079 "name": "Existed_Raid", 
00:14:01.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.079 "strip_size_kb": 64, 00:14:01.079 "state": "configuring", 00:14:01.079 "raid_level": "raid0", 00:14:01.079 "superblock": false, 00:14:01.079 "num_base_bdevs": 4, 00:14:01.079 "num_base_bdevs_discovered": 1, 00:14:01.079 "num_base_bdevs_operational": 4, 00:14:01.079 "base_bdevs_list": [ 00:14:01.079 { 00:14:01.079 "name": "BaseBdev1", 00:14:01.079 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:01.079 "is_configured": true, 00:14:01.079 "data_offset": 0, 00:14:01.079 "data_size": 65536 00:14:01.079 }, 00:14:01.079 { 00:14:01.079 "name": "BaseBdev2", 00:14:01.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.079 "is_configured": false, 00:14:01.079 "data_offset": 0, 00:14:01.079 "data_size": 0 00:14:01.079 }, 00:14:01.079 { 00:14:01.079 "name": "BaseBdev3", 00:14:01.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.079 "is_configured": false, 00:14:01.079 "data_offset": 0, 00:14:01.079 "data_size": 0 00:14:01.079 }, 00:14:01.079 { 00:14:01.079 "name": "BaseBdev4", 00:14:01.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.079 "is_configured": false, 00:14:01.079 "data_offset": 0, 00:14:01.079 "data_size": 0 00:14:01.079 } 00:14:01.079 ] 00:14:01.079 }' 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.079 11:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.646 [2024-11-20 11:26:09.227231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.646 [2024-11-20 11:26:09.227471] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.646 [2024-11-20 11:26:09.239275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.646 [2024-11-20 11:26:09.241821] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.646 [2024-11-20 11:26:09.242005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.646 [2024-11-20 11:26:09.242034] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.646 [2024-11-20 11:26:09.242054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.646 [2024-11-20 11:26:09.242065] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:01.646 [2024-11-20 11:26:09.242078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.646 "name": "Existed_Raid", 00:14:01.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.646 "strip_size_kb": 64, 00:14:01.646 "state": "configuring", 00:14:01.646 "raid_level": "raid0", 00:14:01.646 "superblock": false, 00:14:01.646 "num_base_bdevs": 4, 00:14:01.646 
"num_base_bdevs_discovered": 1, 00:14:01.646 "num_base_bdevs_operational": 4, 00:14:01.646 "base_bdevs_list": [ 00:14:01.646 { 00:14:01.646 "name": "BaseBdev1", 00:14:01.646 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:01.646 "is_configured": true, 00:14:01.646 "data_offset": 0, 00:14:01.646 "data_size": 65536 00:14:01.646 }, 00:14:01.646 { 00:14:01.646 "name": "BaseBdev2", 00:14:01.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.646 "is_configured": false, 00:14:01.646 "data_offset": 0, 00:14:01.646 "data_size": 0 00:14:01.646 }, 00:14:01.646 { 00:14:01.646 "name": "BaseBdev3", 00:14:01.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.646 "is_configured": false, 00:14:01.646 "data_offset": 0, 00:14:01.646 "data_size": 0 00:14:01.646 }, 00:14:01.646 { 00:14:01.646 "name": "BaseBdev4", 00:14:01.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.646 "is_configured": false, 00:14:01.646 "data_offset": 0, 00:14:01.646 "data_size": 0 00:14:01.646 } 00:14:01.646 ] 00:14:01.646 }' 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.646 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.904 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.905 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.905 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.163 [2024-11-20 11:26:09.781950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.164 BaseBdev2 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:02.164 11:26:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.164 [ 00:14:02.164 { 00:14:02.164 "name": "BaseBdev2", 00:14:02.164 "aliases": [ 00:14:02.164 "0b48f7b1-f9f3-4379-b6f4-f699df489880" 00:14:02.164 ], 00:14:02.164 "product_name": "Malloc disk", 00:14:02.164 "block_size": 512, 00:14:02.164 "num_blocks": 65536, 00:14:02.164 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:02.164 "assigned_rate_limits": { 00:14:02.164 "rw_ios_per_sec": 0, 00:14:02.164 "rw_mbytes_per_sec": 0, 00:14:02.164 "r_mbytes_per_sec": 0, 00:14:02.164 "w_mbytes_per_sec": 0 00:14:02.164 }, 00:14:02.164 "claimed": true, 00:14:02.164 "claim_type": "exclusive_write", 00:14:02.164 "zoned": false, 00:14:02.164 "supported_io_types": { 
00:14:02.164 "read": true, 00:14:02.164 "write": true, 00:14:02.164 "unmap": true, 00:14:02.164 "flush": true, 00:14:02.164 "reset": true, 00:14:02.164 "nvme_admin": false, 00:14:02.164 "nvme_io": false, 00:14:02.164 "nvme_io_md": false, 00:14:02.164 "write_zeroes": true, 00:14:02.164 "zcopy": true, 00:14:02.164 "get_zone_info": false, 00:14:02.164 "zone_management": false, 00:14:02.164 "zone_append": false, 00:14:02.164 "compare": false, 00:14:02.164 "compare_and_write": false, 00:14:02.164 "abort": true, 00:14:02.164 "seek_hole": false, 00:14:02.164 "seek_data": false, 00:14:02.164 "copy": true, 00:14:02.164 "nvme_iov_md": false 00:14:02.164 }, 00:14:02.164 "memory_domains": [ 00:14:02.164 { 00:14:02.164 "dma_device_id": "system", 00:14:02.164 "dma_device_type": 1 00:14:02.164 }, 00:14:02.164 { 00:14:02.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.164 "dma_device_type": 2 00:14:02.164 } 00:14:02.164 ], 00:14:02.164 "driver_specific": {} 00:14:02.164 } 00:14:02.164 ] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.164 "name": "Existed_Raid", 00:14:02.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.164 "strip_size_kb": 64, 00:14:02.164 "state": "configuring", 00:14:02.164 "raid_level": "raid0", 00:14:02.164 "superblock": false, 00:14:02.164 "num_base_bdevs": 4, 00:14:02.164 "num_base_bdevs_discovered": 2, 00:14:02.164 "num_base_bdevs_operational": 4, 00:14:02.164 "base_bdevs_list": [ 00:14:02.164 { 00:14:02.164 "name": "BaseBdev1", 00:14:02.164 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:02.164 "is_configured": true, 00:14:02.164 "data_offset": 0, 00:14:02.164 "data_size": 65536 00:14:02.164 }, 00:14:02.164 { 00:14:02.164 "name": "BaseBdev2", 00:14:02.164 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:02.164 
"is_configured": true, 00:14:02.164 "data_offset": 0, 00:14:02.164 "data_size": 65536 00:14:02.164 }, 00:14:02.164 { 00:14:02.164 "name": "BaseBdev3", 00:14:02.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.164 "is_configured": false, 00:14:02.164 "data_offset": 0, 00:14:02.164 "data_size": 0 00:14:02.164 }, 00:14:02.164 { 00:14:02.164 "name": "BaseBdev4", 00:14:02.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.164 "is_configured": false, 00:14:02.164 "data_offset": 0, 00:14:02.164 "data_size": 0 00:14:02.164 } 00:14:02.164 ] 00:14:02.164 }' 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.164 11:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 [2024-11-20 11:26:10.377744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.732 BaseBdev3 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 [ 00:14:02.732 { 00:14:02.732 "name": "BaseBdev3", 00:14:02.732 "aliases": [ 00:14:02.732 "45cf46ce-eb16-447b-b22d-0262b1993bac" 00:14:02.732 ], 00:14:02.732 "product_name": "Malloc disk", 00:14:02.732 "block_size": 512, 00:14:02.732 "num_blocks": 65536, 00:14:02.732 "uuid": "45cf46ce-eb16-447b-b22d-0262b1993bac", 00:14:02.732 "assigned_rate_limits": { 00:14:02.732 "rw_ios_per_sec": 0, 00:14:02.732 "rw_mbytes_per_sec": 0, 00:14:02.732 "r_mbytes_per_sec": 0, 00:14:02.732 "w_mbytes_per_sec": 0 00:14:02.732 }, 00:14:02.732 "claimed": true, 00:14:02.732 "claim_type": "exclusive_write", 00:14:02.732 "zoned": false, 00:14:02.732 "supported_io_types": { 00:14:02.732 "read": true, 00:14:02.732 "write": true, 00:14:02.732 "unmap": true, 00:14:02.732 "flush": true, 00:14:02.732 "reset": true, 00:14:02.732 "nvme_admin": false, 00:14:02.732 "nvme_io": false, 00:14:02.732 "nvme_io_md": false, 00:14:02.732 "write_zeroes": true, 00:14:02.732 "zcopy": true, 00:14:02.732 "get_zone_info": false, 00:14:02.732 "zone_management": false, 00:14:02.732 "zone_append": false, 00:14:02.732 "compare": false, 00:14:02.732 "compare_and_write": false, 
00:14:02.732 "abort": true, 00:14:02.732 "seek_hole": false, 00:14:02.732 "seek_data": false, 00:14:02.732 "copy": true, 00:14:02.732 "nvme_iov_md": false 00:14:02.732 }, 00:14:02.732 "memory_domains": [ 00:14:02.732 { 00:14:02.732 "dma_device_id": "system", 00:14:02.732 "dma_device_type": 1 00:14:02.732 }, 00:14:02.732 { 00:14:02.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.732 "dma_device_type": 2 00:14:02.732 } 00:14:02.732 ], 00:14:02.732 "driver_specific": {} 00:14:02.732 } 00:14:02.732 ] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.732 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.732 "name": "Existed_Raid", 00:14:02.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.732 "strip_size_kb": 64, 00:14:02.732 "state": "configuring", 00:14:02.732 "raid_level": "raid0", 00:14:02.732 "superblock": false, 00:14:02.732 "num_base_bdevs": 4, 00:14:02.732 "num_base_bdevs_discovered": 3, 00:14:02.732 "num_base_bdevs_operational": 4, 00:14:02.732 "base_bdevs_list": [ 00:14:02.732 { 00:14:02.732 "name": "BaseBdev1", 00:14:02.733 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:02.733 "is_configured": true, 00:14:02.733 "data_offset": 0, 00:14:02.733 "data_size": 65536 00:14:02.733 }, 00:14:02.733 { 00:14:02.733 "name": "BaseBdev2", 00:14:02.733 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:02.733 "is_configured": true, 00:14:02.733 "data_offset": 0, 00:14:02.733 "data_size": 65536 00:14:02.733 }, 00:14:02.733 { 00:14:02.733 "name": "BaseBdev3", 00:14:02.733 "uuid": "45cf46ce-eb16-447b-b22d-0262b1993bac", 00:14:02.733 "is_configured": true, 00:14:02.733 "data_offset": 0, 00:14:02.733 "data_size": 65536 00:14:02.733 }, 00:14:02.733 { 00:14:02.733 "name": "BaseBdev4", 00:14:02.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.733 "is_configured": false, 
00:14:02.733 "data_offset": 0, 00:14:02.733 "data_size": 0 00:14:02.733 } 00:14:02.733 ] 00:14:02.733 }' 00:14:02.733 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.733 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.299 [2024-11-20 11:26:10.968661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:03.299 [2024-11-20 11:26:10.968721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:03.299 [2024-11-20 11:26:10.968736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:03.299 [2024-11-20 11:26:10.969082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:03.299 [2024-11-20 11:26:10.969314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:03.299 [2024-11-20 11:26:10.969346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:03.299 [2024-11-20 11:26:10.969703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.299 BaseBdev4 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.299 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.300 11:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.300 [ 00:14:03.300 { 00:14:03.300 "name": "BaseBdev4", 00:14:03.300 "aliases": [ 00:14:03.300 "3b454aee-fc02-41be-aa1d-1d0da88756f0" 00:14:03.300 ], 00:14:03.300 "product_name": "Malloc disk", 00:14:03.300 "block_size": 512, 00:14:03.300 "num_blocks": 65536, 00:14:03.300 "uuid": "3b454aee-fc02-41be-aa1d-1d0da88756f0", 00:14:03.300 "assigned_rate_limits": { 00:14:03.300 "rw_ios_per_sec": 0, 00:14:03.300 "rw_mbytes_per_sec": 0, 00:14:03.300 "r_mbytes_per_sec": 0, 00:14:03.300 "w_mbytes_per_sec": 0 00:14:03.300 }, 00:14:03.300 "claimed": true, 00:14:03.300 "claim_type": "exclusive_write", 00:14:03.300 "zoned": false, 00:14:03.300 "supported_io_types": { 00:14:03.300 "read": true, 00:14:03.300 "write": true, 00:14:03.300 "unmap": true, 00:14:03.300 "flush": true, 00:14:03.300 "reset": true, 00:14:03.300 
"nvme_admin": false, 00:14:03.300 "nvme_io": false, 00:14:03.300 "nvme_io_md": false, 00:14:03.300 "write_zeroes": true, 00:14:03.300 "zcopy": true, 00:14:03.300 "get_zone_info": false, 00:14:03.300 "zone_management": false, 00:14:03.300 "zone_append": false, 00:14:03.300 "compare": false, 00:14:03.300 "compare_and_write": false, 00:14:03.300 "abort": true, 00:14:03.300 "seek_hole": false, 00:14:03.300 "seek_data": false, 00:14:03.300 "copy": true, 00:14:03.300 "nvme_iov_md": false 00:14:03.300 }, 00:14:03.300 "memory_domains": [ 00:14:03.300 { 00:14:03.300 "dma_device_id": "system", 00:14:03.300 "dma_device_type": 1 00:14:03.300 }, 00:14:03.300 { 00:14:03.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.300 "dma_device_type": 2 00:14:03.300 } 00:14:03.300 ], 00:14:03.300 "driver_specific": {} 00:14:03.300 } 00:14:03.300 ] 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.300 11:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.300 "name": "Existed_Raid", 00:14:03.300 "uuid": "2c0a4417-0b23-45ab-9872-6281edb0820d", 00:14:03.300 "strip_size_kb": 64, 00:14:03.300 "state": "online", 00:14:03.300 "raid_level": "raid0", 00:14:03.300 "superblock": false, 00:14:03.300 "num_base_bdevs": 4, 00:14:03.300 "num_base_bdevs_discovered": 4, 00:14:03.300 "num_base_bdevs_operational": 4, 00:14:03.300 "base_bdevs_list": [ 00:14:03.300 { 00:14:03.300 "name": "BaseBdev1", 00:14:03.300 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:03.300 "is_configured": true, 00:14:03.300 "data_offset": 0, 00:14:03.300 "data_size": 65536 00:14:03.300 }, 00:14:03.300 { 00:14:03.300 "name": "BaseBdev2", 00:14:03.300 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:03.300 "is_configured": true, 00:14:03.300 "data_offset": 0, 00:14:03.300 "data_size": 65536 00:14:03.300 }, 00:14:03.300 { 00:14:03.300 "name": "BaseBdev3", 00:14:03.300 "uuid": 
"45cf46ce-eb16-447b-b22d-0262b1993bac", 00:14:03.300 "is_configured": true, 00:14:03.300 "data_offset": 0, 00:14:03.300 "data_size": 65536 00:14:03.300 }, 00:14:03.300 { 00:14:03.300 "name": "BaseBdev4", 00:14:03.300 "uuid": "3b454aee-fc02-41be-aa1d-1d0da88756f0", 00:14:03.300 "is_configured": true, 00:14:03.300 "data_offset": 0, 00:14:03.300 "data_size": 65536 00:14:03.300 } 00:14:03.300 ] 00:14:03.300 }' 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.300 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.916 [2024-11-20 11:26:11.521302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.916 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.916 11:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.916 "name": "Existed_Raid", 00:14:03.916 "aliases": [ 00:14:03.916 "2c0a4417-0b23-45ab-9872-6281edb0820d" 00:14:03.916 ], 00:14:03.916 "product_name": "Raid Volume", 00:14:03.916 "block_size": 512, 00:14:03.916 "num_blocks": 262144, 00:14:03.916 "uuid": "2c0a4417-0b23-45ab-9872-6281edb0820d", 00:14:03.916 "assigned_rate_limits": { 00:14:03.916 "rw_ios_per_sec": 0, 00:14:03.916 "rw_mbytes_per_sec": 0, 00:14:03.916 "r_mbytes_per_sec": 0, 00:14:03.916 "w_mbytes_per_sec": 0 00:14:03.916 }, 00:14:03.916 "claimed": false, 00:14:03.916 "zoned": false, 00:14:03.916 "supported_io_types": { 00:14:03.916 "read": true, 00:14:03.916 "write": true, 00:14:03.916 "unmap": true, 00:14:03.916 "flush": true, 00:14:03.916 "reset": true, 00:14:03.917 "nvme_admin": false, 00:14:03.917 "nvme_io": false, 00:14:03.917 "nvme_io_md": false, 00:14:03.917 "write_zeroes": true, 00:14:03.917 "zcopy": false, 00:14:03.917 "get_zone_info": false, 00:14:03.917 "zone_management": false, 00:14:03.917 "zone_append": false, 00:14:03.917 "compare": false, 00:14:03.917 "compare_and_write": false, 00:14:03.917 "abort": false, 00:14:03.917 "seek_hole": false, 00:14:03.917 "seek_data": false, 00:14:03.917 "copy": false, 00:14:03.917 "nvme_iov_md": false 00:14:03.917 }, 00:14:03.917 "memory_domains": [ 00:14:03.917 { 00:14:03.917 "dma_device_id": "system", 00:14:03.917 "dma_device_type": 1 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.917 "dma_device_type": 2 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "system", 00:14:03.917 "dma_device_type": 1 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.917 "dma_device_type": 2 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "system", 00:14:03.917 "dma_device_type": 1 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:03.917 "dma_device_type": 2 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "system", 00:14:03.917 "dma_device_type": 1 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.917 "dma_device_type": 2 00:14:03.917 } 00:14:03.917 ], 00:14:03.917 "driver_specific": { 00:14:03.917 "raid": { 00:14:03.917 "uuid": "2c0a4417-0b23-45ab-9872-6281edb0820d", 00:14:03.917 "strip_size_kb": 64, 00:14:03.917 "state": "online", 00:14:03.917 "raid_level": "raid0", 00:14:03.917 "superblock": false, 00:14:03.917 "num_base_bdevs": 4, 00:14:03.917 "num_base_bdevs_discovered": 4, 00:14:03.917 "num_base_bdevs_operational": 4, 00:14:03.917 "base_bdevs_list": [ 00:14:03.917 { 00:14:03.917 "name": "BaseBdev1", 00:14:03.917 "uuid": "3dc982dc-9963-4c69-aab6-10a83524e78b", 00:14:03.917 "is_configured": true, 00:14:03.917 "data_offset": 0, 00:14:03.917 "data_size": 65536 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "name": "BaseBdev2", 00:14:03.917 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:03.917 "is_configured": true, 00:14:03.917 "data_offset": 0, 00:14:03.917 "data_size": 65536 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "name": "BaseBdev3", 00:14:03.917 "uuid": "45cf46ce-eb16-447b-b22d-0262b1993bac", 00:14:03.917 "is_configured": true, 00:14:03.917 "data_offset": 0, 00:14:03.917 "data_size": 65536 00:14:03.917 }, 00:14:03.917 { 00:14:03.917 "name": "BaseBdev4", 00:14:03.917 "uuid": "3b454aee-fc02-41be-aa1d-1d0da88756f0", 00:14:03.917 "is_configured": true, 00:14:03.917 "data_offset": 0, 00:14:03.917 "data_size": 65536 00:14:03.917 } 00:14:03.917 ] 00:14:03.917 } 00:14:03.917 } 00:14:03.917 }' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:03.917 BaseBdev2 00:14:03.917 BaseBdev3 
00:14:03.917 BaseBdev4' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.917 11:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.917 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.176 11:26:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 [2024-11-20 11:26:11.869085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.176 [2024-11-20 11:26:11.869140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.176 [2024-11-20 11:26:11.869209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
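The state transition logged here (bdev_raid.sh@261-266) hinges on `has_redundancy`: raid0 has none, so `has_redundancy raid0` returns 1 and deleting BaseBdev1 drives the array to `offline` rather than a degraded-but-online state. A minimal reconstruction of that logic; which levels count as redundant in the real helper is an assumption here, only the raid0 branch is taken from the log:

```shell
# Sketch of the has_redundancy helper exercised at bdev_raid.sh@198-200.
# The redundant-level list (raid1, raid5f) is an assumption; the log
# only demonstrates that raid0 falls through to "return 1".
has_redundancy() {
	case $1 in
		raid1 | raid5f) return 0 ;;
		*) return 1 ;;
	esac
}

# Mirrors bdev_raid.sh@261-262: with no redundancy, removing a base
# bdev means the expected raid state is offline.
if has_redundancy raid0; then
	expected_state=online
else
	expected_state=offline
fi
echo "$expected_state"   # → offline
```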
strip_size=64 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 11:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.176 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.176 "name": "Existed_Raid", 00:14:04.176 "uuid": "2c0a4417-0b23-45ab-9872-6281edb0820d", 00:14:04.176 "strip_size_kb": 64, 00:14:04.176 "state": "offline", 00:14:04.176 "raid_level": "raid0", 00:14:04.176 "superblock": false, 00:14:04.176 "num_base_bdevs": 4, 00:14:04.176 "num_base_bdevs_discovered": 3, 00:14:04.176 "num_base_bdevs_operational": 3, 00:14:04.176 "base_bdevs_list": [ 00:14:04.176 { 00:14:04.176 "name": null, 00:14:04.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.176 "is_configured": false, 00:14:04.176 "data_offset": 0, 00:14:04.176 "data_size": 65536 00:14:04.176 }, 00:14:04.176 { 00:14:04.176 "name": "BaseBdev2", 00:14:04.176 "uuid": "0b48f7b1-f9f3-4379-b6f4-f699df489880", 00:14:04.176 "is_configured": 
true, 00:14:04.176 "data_offset": 0, 00:14:04.176 "data_size": 65536 00:14:04.176 }, 00:14:04.176 { 00:14:04.176 "name": "BaseBdev3", 00:14:04.176 "uuid": "45cf46ce-eb16-447b-b22d-0262b1993bac", 00:14:04.176 "is_configured": true, 00:14:04.176 "data_offset": 0, 00:14:04.176 "data_size": 65536 00:14:04.176 }, 00:14:04.176 { 00:14:04.177 "name": "BaseBdev4", 00:14:04.177 "uuid": "3b454aee-fc02-41be-aa1d-1d0da88756f0", 00:14:04.177 "is_configured": true, 00:14:04.177 "data_offset": 0, 00:14:04.177 "data_size": 65536 00:14:04.177 } 00:14:04.177 ] 00:14:04.177 }' 00:14:04.177 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.177 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:04.745 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.745 [2024-11-20 11:26:12.544961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.003 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.004 [2024-11-20 11:26:12.691546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:05.004 11:26:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.004 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.004 [2024-11-20 11:26:12.834466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:05.004 [2024-11-20 11:26:12.834673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.263 11:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.263 BaseBdev2 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.263 [ 00:14:05.263 { 00:14:05.263 "name": "BaseBdev2", 00:14:05.263 "aliases": [ 00:14:05.263 "b67140ed-5ad3-4183-b173-b48fcf33117c" 00:14:05.263 ], 00:14:05.263 "product_name": "Malloc disk", 00:14:05.263 "block_size": 512, 00:14:05.263 "num_blocks": 65536, 00:14:05.263 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:05.263 "assigned_rate_limits": { 00:14:05.263 "rw_ios_per_sec": 0, 00:14:05.263 "rw_mbytes_per_sec": 0, 00:14:05.263 "r_mbytes_per_sec": 0, 00:14:05.263 "w_mbytes_per_sec": 0 00:14:05.263 }, 00:14:05.263 "claimed": false, 00:14:05.263 "zoned": false, 00:14:05.263 "supported_io_types": { 00:14:05.263 "read": true, 00:14:05.263 "write": true, 00:14:05.263 "unmap": true, 00:14:05.263 "flush": true, 00:14:05.263 "reset": true, 00:14:05.263 "nvme_admin": false, 00:14:05.263 "nvme_io": false, 00:14:05.263 "nvme_io_md": false, 00:14:05.263 "write_zeroes": true, 00:14:05.263 "zcopy": true, 00:14:05.263 "get_zone_info": false, 00:14:05.263 "zone_management": false, 00:14:05.263 "zone_append": false, 00:14:05.263 "compare": false, 00:14:05.263 "compare_and_write": false, 00:14:05.263 "abort": true, 00:14:05.263 "seek_hole": false, 00:14:05.263 "seek_data": false, 
00:14:05.263 "copy": true, 00:14:05.263 "nvme_iov_md": false 00:14:05.263 }, 00:14:05.263 "memory_domains": [ 00:14:05.263 { 00:14:05.263 "dma_device_id": "system", 00:14:05.263 "dma_device_type": 1 00:14:05.263 }, 00:14:05.263 { 00:14:05.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.263 "dma_device_type": 2 00:14:05.263 } 00:14:05.263 ], 00:14:05.263 "driver_specific": {} 00:14:05.263 } 00:14:05.263 ] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.263 BaseBdev3 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:05.263 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.264 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:05.264 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.264 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.264 
11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.264 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 [ 00:14:05.523 { 00:14:05.523 "name": "BaseBdev3", 00:14:05.523 "aliases": [ 00:14:05.523 "350dbddb-702e-4a34-8986-4a8d1def8942" 00:14:05.523 ], 00:14:05.523 "product_name": "Malloc disk", 00:14:05.523 "block_size": 512, 00:14:05.523 "num_blocks": 65536, 00:14:05.523 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:05.523 "assigned_rate_limits": { 00:14:05.523 "rw_ios_per_sec": 0, 00:14:05.523 "rw_mbytes_per_sec": 0, 00:14:05.523 "r_mbytes_per_sec": 0, 00:14:05.523 "w_mbytes_per_sec": 0 00:14:05.523 }, 00:14:05.523 "claimed": false, 00:14:05.523 "zoned": false, 00:14:05.523 "supported_io_types": { 00:14:05.523 "read": true, 00:14:05.523 "write": true, 00:14:05.523 "unmap": true, 00:14:05.523 "flush": true, 00:14:05.523 "reset": true, 00:14:05.523 "nvme_admin": false, 00:14:05.523 "nvme_io": false, 00:14:05.523 "nvme_io_md": false, 00:14:05.523 "write_zeroes": true, 00:14:05.523 "zcopy": true, 00:14:05.523 "get_zone_info": false, 00:14:05.523 "zone_management": false, 00:14:05.523 "zone_append": false, 00:14:05.523 "compare": false, 00:14:05.523 "compare_and_write": false, 00:14:05.523 "abort": true, 00:14:05.523 "seek_hole": false, 00:14:05.523 "seek_data": false, 00:14:05.523 
"copy": true, 00:14:05.523 "nvme_iov_md": false 00:14:05.523 }, 00:14:05.523 "memory_domains": [ 00:14:05.523 { 00:14:05.523 "dma_device_id": "system", 00:14:05.523 "dma_device_type": 1 00:14:05.523 }, 00:14:05.523 { 00:14:05.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.523 "dma_device_type": 2 00:14:05.523 } 00:14:05.523 ], 00:14:05.523 "driver_specific": {} 00:14:05.523 } 00:14:05.523 ] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 BaseBdev4 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.523 11:26:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 [ 00:14:05.523 { 00:14:05.523 "name": "BaseBdev4", 00:14:05.523 "aliases": [ 00:14:05.523 "6a97e2d8-28f2-4ece-90ef-ee580646b0dd" 00:14:05.523 ], 00:14:05.523 "product_name": "Malloc disk", 00:14:05.523 "block_size": 512, 00:14:05.523 "num_blocks": 65536, 00:14:05.523 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:05.523 "assigned_rate_limits": { 00:14:05.523 "rw_ios_per_sec": 0, 00:14:05.523 "rw_mbytes_per_sec": 0, 00:14:05.523 "r_mbytes_per_sec": 0, 00:14:05.523 "w_mbytes_per_sec": 0 00:14:05.523 }, 00:14:05.523 "claimed": false, 00:14:05.523 "zoned": false, 00:14:05.523 "supported_io_types": { 00:14:05.523 "read": true, 00:14:05.523 "write": true, 00:14:05.523 "unmap": true, 00:14:05.523 "flush": true, 00:14:05.523 "reset": true, 00:14:05.523 "nvme_admin": false, 00:14:05.523 "nvme_io": false, 00:14:05.523 "nvme_io_md": false, 00:14:05.523 "write_zeroes": true, 00:14:05.523 "zcopy": true, 00:14:05.523 "get_zone_info": false, 00:14:05.523 "zone_management": false, 00:14:05.523 "zone_append": false, 00:14:05.523 "compare": false, 00:14:05.523 "compare_and_write": false, 00:14:05.523 "abort": true, 00:14:05.523 "seek_hole": false, 00:14:05.523 "seek_data": false, 00:14:05.523 "copy": true, 
00:14:05.523 "nvme_iov_md": false 00:14:05.523 }, 00:14:05.523 "memory_domains": [ 00:14:05.523 { 00:14:05.523 "dma_device_id": "system", 00:14:05.523 "dma_device_type": 1 00:14:05.523 }, 00:14:05.523 { 00:14:05.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.523 "dma_device_type": 2 00:14:05.523 } 00:14:05.523 ], 00:14:05.523 "driver_specific": {} 00:14:05.523 } 00:14:05.523 ] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.523 [2024-11-20 11:26:13.213280] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.523 [2024-11-20 11:26:13.213473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:05.523 [2024-11-20 11:26:13.213705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.524 [2024-11-20 11:26:13.216501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.524 [2024-11-20 11:26:13.216718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.524 11:26:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.524 "name": "Existed_Raid", 00:14:05.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.524 "strip_size_kb": 64, 00:14:05.524 "state": "configuring", 00:14:05.524 
"raid_level": "raid0", 00:14:05.524 "superblock": false, 00:14:05.524 "num_base_bdevs": 4, 00:14:05.524 "num_base_bdevs_discovered": 3, 00:14:05.524 "num_base_bdevs_operational": 4, 00:14:05.524 "base_bdevs_list": [ 00:14:05.524 { 00:14:05.524 "name": "BaseBdev1", 00:14:05.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.524 "is_configured": false, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 0 00:14:05.524 }, 00:14:05.524 { 00:14:05.524 "name": "BaseBdev2", 00:14:05.524 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:05.524 "is_configured": true, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 65536 00:14:05.524 }, 00:14:05.524 { 00:14:05.524 "name": "BaseBdev3", 00:14:05.524 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:05.524 "is_configured": true, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 65536 00:14:05.524 }, 00:14:05.524 { 00:14:05.524 "name": "BaseBdev4", 00:14:05.524 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:05.524 "is_configured": true, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 65536 00:14:05.524 } 00:14:05.524 ] 00:14:05.524 }' 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.524 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.092 [2024-11-20 11:26:13.753431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.092 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.092 "name": "Existed_Raid", 00:14:06.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.092 "strip_size_kb": 64, 00:14:06.092 "state": "configuring", 00:14:06.092 "raid_level": "raid0", 00:14:06.092 "superblock": false, 00:14:06.092 
"num_base_bdevs": 4, 00:14:06.092 "num_base_bdevs_discovered": 2, 00:14:06.092 "num_base_bdevs_operational": 4, 00:14:06.092 "base_bdevs_list": [ 00:14:06.092 { 00:14:06.092 "name": "BaseBdev1", 00:14:06.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.092 "is_configured": false, 00:14:06.092 "data_offset": 0, 00:14:06.092 "data_size": 0 00:14:06.092 }, 00:14:06.092 { 00:14:06.092 "name": null, 00:14:06.092 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:06.092 "is_configured": false, 00:14:06.092 "data_offset": 0, 00:14:06.092 "data_size": 65536 00:14:06.092 }, 00:14:06.092 { 00:14:06.092 "name": "BaseBdev3", 00:14:06.092 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:06.093 "is_configured": true, 00:14:06.093 "data_offset": 0, 00:14:06.093 "data_size": 65536 00:14:06.093 }, 00:14:06.093 { 00:14:06.093 "name": "BaseBdev4", 00:14:06.093 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:06.093 "is_configured": true, 00:14:06.093 "data_offset": 0, 00:14:06.093 "data_size": 65536 00:14:06.093 } 00:14:06.093 ] 00:14:06.093 }' 00:14:06.093 11:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.093 11:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:06.660 11:26:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.660 [2024-11-20 11:26:14.400964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.660 BaseBdev1 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.660 11:26:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.660 [ 00:14:06.660 { 00:14:06.660 "name": "BaseBdev1", 00:14:06.660 "aliases": [ 00:14:06.660 "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335" 00:14:06.660 ], 00:14:06.660 "product_name": "Malloc disk", 00:14:06.660 "block_size": 512, 00:14:06.660 "num_blocks": 65536, 00:14:06.660 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:06.660 "assigned_rate_limits": { 00:14:06.660 "rw_ios_per_sec": 0, 00:14:06.660 "rw_mbytes_per_sec": 0, 00:14:06.660 "r_mbytes_per_sec": 0, 00:14:06.660 "w_mbytes_per_sec": 0 00:14:06.660 }, 00:14:06.660 "claimed": true, 00:14:06.660 "claim_type": "exclusive_write", 00:14:06.660 "zoned": false, 00:14:06.660 "supported_io_types": { 00:14:06.660 "read": true, 00:14:06.660 "write": true, 00:14:06.660 "unmap": true, 00:14:06.660 "flush": true, 00:14:06.660 "reset": true, 00:14:06.660 "nvme_admin": false, 00:14:06.660 "nvme_io": false, 00:14:06.660 "nvme_io_md": false, 00:14:06.660 "write_zeroes": true, 00:14:06.660 "zcopy": true, 00:14:06.660 "get_zone_info": false, 00:14:06.660 "zone_management": false, 00:14:06.660 "zone_append": false, 00:14:06.660 "compare": false, 00:14:06.660 "compare_and_write": false, 00:14:06.660 "abort": true, 00:14:06.660 "seek_hole": false, 00:14:06.660 "seek_data": false, 00:14:06.660 "copy": true, 00:14:06.660 "nvme_iov_md": false 00:14:06.660 }, 00:14:06.660 "memory_domains": [ 00:14:06.660 { 00:14:06.660 "dma_device_id": "system", 00:14:06.660 "dma_device_type": 1 00:14:06.660 }, 00:14:06.660 { 00:14:06.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.660 "dma_device_type": 2 00:14:06.660 } 00:14:06.660 ], 00:14:06.660 "driver_specific": {} 00:14:06.660 } 00:14:06.660 ] 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.661 "name": "Existed_Raid", 00:14:06.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.661 "strip_size_kb": 64, 00:14:06.661 "state": "configuring", 00:14:06.661 "raid_level": "raid0", 00:14:06.661 "superblock": false, 
00:14:06.661 "num_base_bdevs": 4, 00:14:06.661 "num_base_bdevs_discovered": 3, 00:14:06.661 "num_base_bdevs_operational": 4, 00:14:06.661 "base_bdevs_list": [ 00:14:06.661 { 00:14:06.661 "name": "BaseBdev1", 00:14:06.661 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:06.661 "is_configured": true, 00:14:06.661 "data_offset": 0, 00:14:06.661 "data_size": 65536 00:14:06.661 }, 00:14:06.661 { 00:14:06.661 "name": null, 00:14:06.661 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:06.661 "is_configured": false, 00:14:06.661 "data_offset": 0, 00:14:06.661 "data_size": 65536 00:14:06.661 }, 00:14:06.661 { 00:14:06.661 "name": "BaseBdev3", 00:14:06.661 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:06.661 "is_configured": true, 00:14:06.661 "data_offset": 0, 00:14:06.661 "data_size": 65536 00:14:06.661 }, 00:14:06.661 { 00:14:06.661 "name": "BaseBdev4", 00:14:06.661 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:06.661 "is_configured": true, 00:14:06.661 "data_offset": 0, 00:14:06.661 "data_size": 65536 00:14:06.661 } 00:14:06.661 ] 00:14:06.661 }' 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.661 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.227 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.227 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.227 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 11:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.228 11:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:07.228 11:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 [2024-11-20 11:26:15.029247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.228 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.486 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.486 "name": "Existed_Raid", 00:14:07.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.486 "strip_size_kb": 64, 00:14:07.486 "state": "configuring", 00:14:07.486 "raid_level": "raid0", 00:14:07.486 "superblock": false, 00:14:07.486 "num_base_bdevs": 4, 00:14:07.486 "num_base_bdevs_discovered": 2, 00:14:07.486 "num_base_bdevs_operational": 4, 00:14:07.486 "base_bdevs_list": [ 00:14:07.486 { 00:14:07.486 "name": "BaseBdev1", 00:14:07.486 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:07.486 "is_configured": true, 00:14:07.486 "data_offset": 0, 00:14:07.486 "data_size": 65536 00:14:07.486 }, 00:14:07.486 { 00:14:07.486 "name": null, 00:14:07.486 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:07.486 "is_configured": false, 00:14:07.486 "data_offset": 0, 00:14:07.486 "data_size": 65536 00:14:07.486 }, 00:14:07.486 { 00:14:07.486 "name": null, 00:14:07.486 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:07.486 "is_configured": false, 00:14:07.486 "data_offset": 0, 00:14:07.486 "data_size": 65536 00:14:07.486 }, 00:14:07.486 { 00:14:07.486 "name": "BaseBdev4", 00:14:07.486 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:07.486 "is_configured": true, 00:14:07.486 "data_offset": 0, 00:14:07.486 "data_size": 65536 00:14:07.486 } 00:14:07.486 ] 00:14:07.486 }' 00:14:07.486 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.486 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.745 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:14:07.745 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.745 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.745 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.745 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.003 [2024-11-20 11:26:15.625390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.003 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.003 "name": "Existed_Raid", 00:14:08.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.003 "strip_size_kb": 64, 00:14:08.003 "state": "configuring", 00:14:08.003 "raid_level": "raid0", 00:14:08.003 "superblock": false, 00:14:08.003 "num_base_bdevs": 4, 00:14:08.003 "num_base_bdevs_discovered": 3, 00:14:08.003 "num_base_bdevs_operational": 4, 00:14:08.003 "base_bdevs_list": [ 00:14:08.003 { 00:14:08.003 "name": "BaseBdev1", 00:14:08.003 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:08.003 "is_configured": true, 00:14:08.003 "data_offset": 0, 00:14:08.003 "data_size": 65536 00:14:08.003 }, 00:14:08.003 { 00:14:08.003 "name": null, 00:14:08.003 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:08.003 "is_configured": false, 00:14:08.003 "data_offset": 0, 00:14:08.003 "data_size": 65536 00:14:08.003 }, 00:14:08.003 { 00:14:08.003 "name": "BaseBdev3", 00:14:08.003 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:08.003 "is_configured": 
true, 00:14:08.003 "data_offset": 0, 00:14:08.004 "data_size": 65536 00:14:08.004 }, 00:14:08.004 { 00:14:08.004 "name": "BaseBdev4", 00:14:08.004 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:08.004 "is_configured": true, 00:14:08.004 "data_offset": 0, 00:14:08.004 "data_size": 65536 00:14:08.004 } 00:14:08.004 ] 00:14:08.004 }' 00:14:08.004 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.004 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.572 [2024-11-20 11:26:16.193575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.572 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.572 "name": "Existed_Raid", 00:14:08.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.572 "strip_size_kb": 64, 00:14:08.572 "state": "configuring", 00:14:08.572 "raid_level": "raid0", 00:14:08.572 "superblock": false, 00:14:08.572 "num_base_bdevs": 4, 00:14:08.572 "num_base_bdevs_discovered": 2, 00:14:08.572 "num_base_bdevs_operational": 4, 00:14:08.572 
"base_bdevs_list": [ 00:14:08.572 { 00:14:08.572 "name": null, 00:14:08.572 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:08.572 "is_configured": false, 00:14:08.572 "data_offset": 0, 00:14:08.572 "data_size": 65536 00:14:08.572 }, 00:14:08.572 { 00:14:08.572 "name": null, 00:14:08.572 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:08.572 "is_configured": false, 00:14:08.572 "data_offset": 0, 00:14:08.572 "data_size": 65536 00:14:08.572 }, 00:14:08.572 { 00:14:08.572 "name": "BaseBdev3", 00:14:08.572 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:08.572 "is_configured": true, 00:14:08.572 "data_offset": 0, 00:14:08.572 "data_size": 65536 00:14:08.572 }, 00:14:08.572 { 00:14:08.572 "name": "BaseBdev4", 00:14:08.572 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:08.572 "is_configured": true, 00:14:08.572 "data_offset": 0, 00:14:08.572 "data_size": 65536 00:14:08.573 } 00:14:08.573 ] 00:14:08.573 }' 00:14:08.573 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.573 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:09.140 11:26:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.140 [2024-11-20 11:26:16.880486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.140 "name": "Existed_Raid", 00:14:09.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.140 "strip_size_kb": 64, 00:14:09.140 "state": "configuring", 00:14:09.140 "raid_level": "raid0", 00:14:09.140 "superblock": false, 00:14:09.140 "num_base_bdevs": 4, 00:14:09.140 "num_base_bdevs_discovered": 3, 00:14:09.140 "num_base_bdevs_operational": 4, 00:14:09.140 "base_bdevs_list": [ 00:14:09.140 { 00:14:09.140 "name": null, 00:14:09.140 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:09.140 "is_configured": false, 00:14:09.140 "data_offset": 0, 00:14:09.140 "data_size": 65536 00:14:09.140 }, 00:14:09.140 { 00:14:09.140 "name": "BaseBdev2", 00:14:09.140 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:09.140 "is_configured": true, 00:14:09.140 "data_offset": 0, 00:14:09.140 "data_size": 65536 00:14:09.140 }, 00:14:09.140 { 00:14:09.140 "name": "BaseBdev3", 00:14:09.140 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:09.140 "is_configured": true, 00:14:09.140 "data_offset": 0, 00:14:09.140 "data_size": 65536 00:14:09.140 }, 00:14:09.140 { 00:14:09.140 "name": "BaseBdev4", 00:14:09.140 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:09.140 "is_configured": true, 00:14:09.140 "data_offset": 0, 00:14:09.140 "data_size": 65536 00:14:09.140 } 00:14:09.140 ] 00:14:09.140 }' 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.140 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c4d0ac56-8dcf-4b22-aee2-9fabc94f4335 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.708 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 [2024-11-20 11:26:17.578650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:09.968 [2024-11-20 11:26:17.578720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:09.968 [2024-11-20 11:26:17.578732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:09.968 [2024-11-20 11:26:17.579078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:09.968 [2024-11-20 11:26:17.579275] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:09.968 [2024-11-20 11:26:17.579297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:09.968 [2024-11-20 11:26:17.579652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.968 NewBaseBdev 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 [ 00:14:09.968 { 
00:14:09.968 "name": "NewBaseBdev", 00:14:09.968 "aliases": [ 00:14:09.968 "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335" 00:14:09.968 ], 00:14:09.968 "product_name": "Malloc disk", 00:14:09.968 "block_size": 512, 00:14:09.968 "num_blocks": 65536, 00:14:09.968 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:09.968 "assigned_rate_limits": { 00:14:09.968 "rw_ios_per_sec": 0, 00:14:09.968 "rw_mbytes_per_sec": 0, 00:14:09.968 "r_mbytes_per_sec": 0, 00:14:09.968 "w_mbytes_per_sec": 0 00:14:09.968 }, 00:14:09.968 "claimed": true, 00:14:09.968 "claim_type": "exclusive_write", 00:14:09.968 "zoned": false, 00:14:09.968 "supported_io_types": { 00:14:09.968 "read": true, 00:14:09.968 "write": true, 00:14:09.968 "unmap": true, 00:14:09.968 "flush": true, 00:14:09.968 "reset": true, 00:14:09.968 "nvme_admin": false, 00:14:09.968 "nvme_io": false, 00:14:09.968 "nvme_io_md": false, 00:14:09.968 "write_zeroes": true, 00:14:09.968 "zcopy": true, 00:14:09.968 "get_zone_info": false, 00:14:09.968 "zone_management": false, 00:14:09.968 "zone_append": false, 00:14:09.968 "compare": false, 00:14:09.968 "compare_and_write": false, 00:14:09.968 "abort": true, 00:14:09.968 "seek_hole": false, 00:14:09.968 "seek_data": false, 00:14:09.968 "copy": true, 00:14:09.968 "nvme_iov_md": false 00:14:09.968 }, 00:14:09.968 "memory_domains": [ 00:14:09.968 { 00:14:09.968 "dma_device_id": "system", 00:14:09.968 "dma_device_type": 1 00:14:09.968 }, 00:14:09.968 { 00:14:09.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.968 "dma_device_type": 2 00:14:09.968 } 00:14:09.968 ], 00:14:09.968 "driver_specific": {} 00:14:09.968 } 00:14:09.968 ] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:09.968 
11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.968 "name": "Existed_Raid", 00:14:09.968 "uuid": "b8b42b1b-c7e7-47e6-b93d-db0254922bbd", 00:14:09.968 "strip_size_kb": 64, 00:14:09.968 "state": "online", 00:14:09.968 "raid_level": "raid0", 00:14:09.968 "superblock": false, 00:14:09.968 "num_base_bdevs": 4, 00:14:09.968 "num_base_bdevs_discovered": 4, 00:14:09.968 
"num_base_bdevs_operational": 4, 00:14:09.968 "base_bdevs_list": [ 00:14:09.968 { 00:14:09.968 "name": "NewBaseBdev", 00:14:09.968 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:09.968 "is_configured": true, 00:14:09.968 "data_offset": 0, 00:14:09.968 "data_size": 65536 00:14:09.968 }, 00:14:09.968 { 00:14:09.968 "name": "BaseBdev2", 00:14:09.968 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:09.968 "is_configured": true, 00:14:09.968 "data_offset": 0, 00:14:09.968 "data_size": 65536 00:14:09.968 }, 00:14:09.968 { 00:14:09.968 "name": "BaseBdev3", 00:14:09.968 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:09.968 "is_configured": true, 00:14:09.968 "data_offset": 0, 00:14:09.968 "data_size": 65536 00:14:09.968 }, 00:14:09.968 { 00:14:09.968 "name": "BaseBdev4", 00:14:09.968 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:09.968 "is_configured": true, 00:14:09.968 "data_offset": 0, 00:14:09.968 "data_size": 65536 00:14:09.968 } 00:14:09.968 ] 00:14:09.968 }' 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.968 11:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.536 [2024-11-20 11:26:18.143393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.536 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:10.536 "name": "Existed_Raid", 00:14:10.536 "aliases": [ 00:14:10.536 "b8b42b1b-c7e7-47e6-b93d-db0254922bbd" 00:14:10.536 ], 00:14:10.536 "product_name": "Raid Volume", 00:14:10.536 "block_size": 512, 00:14:10.536 "num_blocks": 262144, 00:14:10.536 "uuid": "b8b42b1b-c7e7-47e6-b93d-db0254922bbd", 00:14:10.536 "assigned_rate_limits": { 00:14:10.537 "rw_ios_per_sec": 0, 00:14:10.537 "rw_mbytes_per_sec": 0, 00:14:10.537 "r_mbytes_per_sec": 0, 00:14:10.537 "w_mbytes_per_sec": 0 00:14:10.537 }, 00:14:10.537 "claimed": false, 00:14:10.537 "zoned": false, 00:14:10.537 "supported_io_types": { 00:14:10.537 "read": true, 00:14:10.537 "write": true, 00:14:10.537 "unmap": true, 00:14:10.537 "flush": true, 00:14:10.537 "reset": true, 00:14:10.537 "nvme_admin": false, 00:14:10.537 "nvme_io": false, 00:14:10.537 "nvme_io_md": false, 00:14:10.537 "write_zeroes": true, 00:14:10.537 "zcopy": false, 00:14:10.537 "get_zone_info": false, 00:14:10.537 "zone_management": false, 00:14:10.537 "zone_append": false, 00:14:10.537 "compare": false, 00:14:10.537 "compare_and_write": false, 00:14:10.537 "abort": false, 00:14:10.537 "seek_hole": false, 00:14:10.537 "seek_data": false, 00:14:10.537 "copy": false, 00:14:10.537 "nvme_iov_md": false 00:14:10.537 }, 00:14:10.537 "memory_domains": [ 00:14:10.537 { 00:14:10.537 "dma_device_id": "system", 
00:14:10.537 "dma_device_type": 1 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.537 "dma_device_type": 2 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "system", 00:14:10.537 "dma_device_type": 1 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.537 "dma_device_type": 2 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "system", 00:14:10.537 "dma_device_type": 1 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.537 "dma_device_type": 2 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "system", 00:14:10.537 "dma_device_type": 1 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.537 "dma_device_type": 2 00:14:10.537 } 00:14:10.537 ], 00:14:10.537 "driver_specific": { 00:14:10.537 "raid": { 00:14:10.537 "uuid": "b8b42b1b-c7e7-47e6-b93d-db0254922bbd", 00:14:10.537 "strip_size_kb": 64, 00:14:10.537 "state": "online", 00:14:10.537 "raid_level": "raid0", 00:14:10.537 "superblock": false, 00:14:10.537 "num_base_bdevs": 4, 00:14:10.537 "num_base_bdevs_discovered": 4, 00:14:10.537 "num_base_bdevs_operational": 4, 00:14:10.537 "base_bdevs_list": [ 00:14:10.537 { 00:14:10.537 "name": "NewBaseBdev", 00:14:10.537 "uuid": "c4d0ac56-8dcf-4b22-aee2-9fabc94f4335", 00:14:10.537 "is_configured": true, 00:14:10.537 "data_offset": 0, 00:14:10.537 "data_size": 65536 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "name": "BaseBdev2", 00:14:10.537 "uuid": "b67140ed-5ad3-4183-b173-b48fcf33117c", 00:14:10.537 "is_configured": true, 00:14:10.537 "data_offset": 0, 00:14:10.537 "data_size": 65536 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "name": "BaseBdev3", 00:14:10.537 "uuid": "350dbddb-702e-4a34-8986-4a8d1def8942", 00:14:10.537 "is_configured": true, 00:14:10.537 "data_offset": 0, 00:14:10.537 "data_size": 65536 00:14:10.537 }, 00:14:10.537 { 00:14:10.537 "name": "BaseBdev4", 
00:14:10.537 "uuid": "6a97e2d8-28f2-4ece-90ef-ee580646b0dd", 00:14:10.537 "is_configured": true, 00:14:10.537 "data_offset": 0, 00:14:10.537 "data_size": 65536 00:14:10.537 } 00:14:10.537 ] 00:14:10.537 } 00:14:10.537 } 00:14:10.537 }' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:10.537 BaseBdev2 00:14:10.537 BaseBdev3 00:14:10.537 BaseBdev4' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.537 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:10.797 11:26:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.797 [2024-11-20 11:26:18.523084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.797 [2024-11-20 11:26:18.523122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.797 [2024-11-20 11:26:18.523260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.797 [2024-11-20 11:26:18.523345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.797 [2024-11-20 11:26:18.523361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69384 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69384 ']' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69384 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69384 00:14:10.797 killing process with pid 69384 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69384' 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69384 00:14:10.797 [2024-11-20 11:26:18.560916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.797 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69384 00:14:11.365 [2024-11-20 11:26:18.916729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.314 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:12.314 00:14:12.314 real 0m12.973s 00:14:12.314 user 0m21.529s 00:14:12.314 sys 0m1.766s 00:14:12.314 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.314 ************************************ 00:14:12.314 END TEST raid_state_function_test 00:14:12.314 ************************************ 00:14:12.314 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.314 11:26:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:14:12.314 11:26:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:12.314 11:26:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.314 11:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.314 ************************************ 00:14:12.314 START TEST raid_state_function_test_sb 00:14:12.314 ************************************ 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:12.314 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:12.315 11:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:12.315 Process raid pid: 70067 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70067 00:14:12.315 11:26:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70067' 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70067 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70067 ']' 00:14:12.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.315 11:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.315 [2024-11-20 11:26:20.124785] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:12.315 [2024-11-20 11:26:20.124968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.574 [2024-11-20 11:26:20.305094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.833 [2024-11-20 11:26:20.442146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.833 [2024-11-20 11:26:20.655274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.833 [2024-11-20 11:26:20.655325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.401 [2024-11-20 11:26:21.144223] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.401 [2024-11-20 11:26:21.144287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.401 [2024-11-20 11:26:21.144306] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.401 [2024-11-20 11:26:21.144323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.401 [2024-11-20 11:26:21.144333] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:14:13.401 [2024-11-20 11:26:21.144347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.401 [2024-11-20 11:26:21.144357] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:13.401 [2024-11-20 11:26:21.144371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.401 11:26:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.401 "name": "Existed_Raid", 00:14:13.401 "uuid": "5d67d717-6dcb-4962-9efe-50b0d28bedbf", 00:14:13.401 "strip_size_kb": 64, 00:14:13.401 "state": "configuring", 00:14:13.401 "raid_level": "raid0", 00:14:13.401 "superblock": true, 00:14:13.401 "num_base_bdevs": 4, 00:14:13.401 "num_base_bdevs_discovered": 0, 00:14:13.401 "num_base_bdevs_operational": 4, 00:14:13.401 "base_bdevs_list": [ 00:14:13.401 { 00:14:13.401 "name": "BaseBdev1", 00:14:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.401 "is_configured": false, 00:14:13.401 "data_offset": 0, 00:14:13.401 "data_size": 0 00:14:13.401 }, 00:14:13.401 { 00:14:13.401 "name": "BaseBdev2", 00:14:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.401 "is_configured": false, 00:14:13.401 "data_offset": 0, 00:14:13.401 "data_size": 0 00:14:13.401 }, 00:14:13.401 { 00:14:13.401 "name": "BaseBdev3", 00:14:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.401 "is_configured": false, 00:14:13.401 "data_offset": 0, 00:14:13.401 "data_size": 0 00:14:13.401 }, 00:14:13.401 { 00:14:13.401 "name": "BaseBdev4", 00:14:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.401 "is_configured": false, 00:14:13.401 "data_offset": 0, 00:14:13.401 "data_size": 0 00:14:13.401 } 00:14:13.401 ] 00:14:13.401 }' 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.401 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.970 [2024-11-20 11:26:21.680386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.970 [2024-11-20 11:26:21.680610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.970 [2024-11-20 11:26:21.688393] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.970 [2024-11-20 11:26:21.688486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.970 [2024-11-20 11:26:21.688503] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.970 [2024-11-20 11:26:21.688550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.970 [2024-11-20 11:26:21.688560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:13.970 [2024-11-20 11:26:21.688574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:13.970 [2024-11-20 11:26:21.688583] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:13.970 [2024-11-20 11:26:21.688598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.970 [2024-11-20 11:26:21.734191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.970 BaseBdev1 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.970 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.971 [ 00:14:13.971 { 00:14:13.971 "name": "BaseBdev1", 00:14:13.971 "aliases": [ 00:14:13.971 "ad45c52c-8b81-4e46-b593-77762a9bd8cd" 00:14:13.971 ], 00:14:13.971 "product_name": "Malloc disk", 00:14:13.971 "block_size": 512, 00:14:13.971 "num_blocks": 65536, 00:14:13.971 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:13.971 "assigned_rate_limits": { 00:14:13.971 "rw_ios_per_sec": 0, 00:14:13.971 "rw_mbytes_per_sec": 0, 00:14:13.971 "r_mbytes_per_sec": 0, 00:14:13.971 "w_mbytes_per_sec": 0 00:14:13.971 }, 00:14:13.971 "claimed": true, 00:14:13.971 "claim_type": "exclusive_write", 00:14:13.971 "zoned": false, 00:14:13.971 "supported_io_types": { 00:14:13.971 "read": true, 00:14:13.971 "write": true, 00:14:13.971 "unmap": true, 00:14:13.971 "flush": true, 00:14:13.971 "reset": true, 00:14:13.971 "nvme_admin": false, 00:14:13.971 "nvme_io": false, 00:14:13.971 "nvme_io_md": false, 00:14:13.971 "write_zeroes": true, 00:14:13.971 "zcopy": true, 00:14:13.971 "get_zone_info": false, 00:14:13.971 "zone_management": false, 00:14:13.971 "zone_append": false, 00:14:13.971 "compare": false, 00:14:13.971 "compare_and_write": false, 00:14:13.971 "abort": true, 00:14:13.971 "seek_hole": false, 00:14:13.971 "seek_data": false, 00:14:13.971 "copy": true, 00:14:13.971 "nvme_iov_md": false 00:14:13.971 }, 00:14:13.971 "memory_domains": [ 00:14:13.971 { 00:14:13.971 "dma_device_id": "system", 00:14:13.971 "dma_device_type": 1 00:14:13.971 }, 00:14:13.971 { 00:14:13.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.971 "dma_device_type": 2 00:14:13.971 } 00:14:13.971 ], 00:14:13.971 "driver_specific": {} 
00:14:13.971 } 00:14:13.971 ] 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.971 11:26:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.230 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.230 "name": "Existed_Raid", 00:14:14.230 "uuid": "612bdea9-0741-4ad2-93cf-0eb523eae875", 00:14:14.230 "strip_size_kb": 64, 00:14:14.230 "state": "configuring", 00:14:14.230 "raid_level": "raid0", 00:14:14.230 "superblock": true, 00:14:14.230 "num_base_bdevs": 4, 00:14:14.230 "num_base_bdevs_discovered": 1, 00:14:14.230 "num_base_bdevs_operational": 4, 00:14:14.230 "base_bdevs_list": [ 00:14:14.230 { 00:14:14.230 "name": "BaseBdev1", 00:14:14.230 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:14.230 "is_configured": true, 00:14:14.230 "data_offset": 2048, 00:14:14.230 "data_size": 63488 00:14:14.230 }, 00:14:14.230 { 00:14:14.230 "name": "BaseBdev2", 00:14:14.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.230 "is_configured": false, 00:14:14.230 "data_offset": 0, 00:14:14.230 "data_size": 0 00:14:14.230 }, 00:14:14.230 { 00:14:14.230 "name": "BaseBdev3", 00:14:14.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.230 "is_configured": false, 00:14:14.230 "data_offset": 0, 00:14:14.230 "data_size": 0 00:14:14.230 }, 00:14:14.230 { 00:14:14.230 "name": "BaseBdev4", 00:14:14.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.230 "is_configured": false, 00:14:14.230 "data_offset": 0, 00:14:14.230 "data_size": 0 00:14:14.230 } 00:14:14.230 ] 00:14:14.230 }' 00:14:14.230 11:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.230 11:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 11:26:22.314467] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.489 [2024-11-20 11:26:22.314527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.489 [2024-11-20 11:26:22.326490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.489 [2024-11-20 11:26:22.329155] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.489 [2024-11-20 11:26:22.329324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.489 [2024-11-20 11:26:22.329454] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.489 [2024-11-20 11:26:22.329516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.489 [2024-11-20 11:26:22.329706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:14.489 [2024-11-20 11:26:22.329765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:14.489 11:26:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:14.489 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.748 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.748 "name": 
"Existed_Raid", 00:14:14.748 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:14.748 "strip_size_kb": 64, 00:14:14.748 "state": "configuring", 00:14:14.748 "raid_level": "raid0", 00:14:14.748 "superblock": true, 00:14:14.748 "num_base_bdevs": 4, 00:14:14.748 "num_base_bdevs_discovered": 1, 00:14:14.748 "num_base_bdevs_operational": 4, 00:14:14.748 "base_bdevs_list": [ 00:14:14.748 { 00:14:14.748 "name": "BaseBdev1", 00:14:14.748 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:14.748 "is_configured": true, 00:14:14.748 "data_offset": 2048, 00:14:14.748 "data_size": 63488 00:14:14.748 }, 00:14:14.748 { 00:14:14.748 "name": "BaseBdev2", 00:14:14.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.748 "is_configured": false, 00:14:14.748 "data_offset": 0, 00:14:14.748 "data_size": 0 00:14:14.748 }, 00:14:14.748 { 00:14:14.748 "name": "BaseBdev3", 00:14:14.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.748 "is_configured": false, 00:14:14.748 "data_offset": 0, 00:14:14.748 "data_size": 0 00:14:14.748 }, 00:14:14.748 { 00:14:14.748 "name": "BaseBdev4", 00:14:14.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.748 "is_configured": false, 00:14:14.748 "data_offset": 0, 00:14:14.748 "data_size": 0 00:14:14.748 } 00:14:14.748 ] 00:14:14.748 }' 00:14:14.749 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.749 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.008 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.008 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.008 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.269 [2024-11-20 11:26:22.878790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:15.269 BaseBdev2 00:14:15.269 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.269 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:15.269 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.270 [ 00:14:15.270 { 00:14:15.270 "name": "BaseBdev2", 00:14:15.270 "aliases": [ 00:14:15.270 "8e899138-20a2-40c1-8061-3670033ca6fa" 00:14:15.270 ], 00:14:15.270 "product_name": "Malloc disk", 00:14:15.270 "block_size": 512, 00:14:15.270 "num_blocks": 65536, 00:14:15.270 "uuid": "8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:15.270 
"assigned_rate_limits": { 00:14:15.270 "rw_ios_per_sec": 0, 00:14:15.270 "rw_mbytes_per_sec": 0, 00:14:15.270 "r_mbytes_per_sec": 0, 00:14:15.270 "w_mbytes_per_sec": 0 00:14:15.270 }, 00:14:15.270 "claimed": true, 00:14:15.270 "claim_type": "exclusive_write", 00:14:15.270 "zoned": false, 00:14:15.270 "supported_io_types": { 00:14:15.270 "read": true, 00:14:15.270 "write": true, 00:14:15.270 "unmap": true, 00:14:15.270 "flush": true, 00:14:15.270 "reset": true, 00:14:15.270 "nvme_admin": false, 00:14:15.270 "nvme_io": false, 00:14:15.270 "nvme_io_md": false, 00:14:15.270 "write_zeroes": true, 00:14:15.270 "zcopy": true, 00:14:15.270 "get_zone_info": false, 00:14:15.270 "zone_management": false, 00:14:15.270 "zone_append": false, 00:14:15.270 "compare": false, 00:14:15.270 "compare_and_write": false, 00:14:15.270 "abort": true, 00:14:15.270 "seek_hole": false, 00:14:15.270 "seek_data": false, 00:14:15.270 "copy": true, 00:14:15.270 "nvme_iov_md": false 00:14:15.270 }, 00:14:15.270 "memory_domains": [ 00:14:15.270 { 00:14:15.270 "dma_device_id": "system", 00:14:15.270 "dma_device_type": 1 00:14:15.270 }, 00:14:15.270 { 00:14:15.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.270 "dma_device_type": 2 00:14:15.270 } 00:14:15.270 ], 00:14:15.270 "driver_specific": {} 00:14:15.270 } 00:14:15.270 ] 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.270 "name": "Existed_Raid", 00:14:15.270 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:15.270 "strip_size_kb": 64, 00:14:15.270 "state": "configuring", 00:14:15.270 "raid_level": "raid0", 00:14:15.270 "superblock": true, 00:14:15.270 "num_base_bdevs": 4, 00:14:15.270 "num_base_bdevs_discovered": 2, 00:14:15.270 "num_base_bdevs_operational": 4, 
00:14:15.270 "base_bdevs_list": [ 00:14:15.270 { 00:14:15.270 "name": "BaseBdev1", 00:14:15.270 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:15.270 "is_configured": true, 00:14:15.270 "data_offset": 2048, 00:14:15.270 "data_size": 63488 00:14:15.270 }, 00:14:15.270 { 00:14:15.270 "name": "BaseBdev2", 00:14:15.270 "uuid": "8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:15.270 "is_configured": true, 00:14:15.270 "data_offset": 2048, 00:14:15.270 "data_size": 63488 00:14:15.270 }, 00:14:15.270 { 00:14:15.270 "name": "BaseBdev3", 00:14:15.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.270 "is_configured": false, 00:14:15.270 "data_offset": 0, 00:14:15.270 "data_size": 0 00:14:15.270 }, 00:14:15.270 { 00:14:15.270 "name": "BaseBdev4", 00:14:15.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.270 "is_configured": false, 00:14:15.270 "data_offset": 0, 00:14:15.270 "data_size": 0 00:14:15.270 } 00:14:15.270 ] 00:14:15.270 }' 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.270 11:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 [2024-11-20 11:26:23.489503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.838 BaseBdev3 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 [ 00:14:15.838 { 00:14:15.838 "name": "BaseBdev3", 00:14:15.838 "aliases": [ 00:14:15.838 "466ed7df-caa9-41d5-8966-8f82c5794704" 00:14:15.838 ], 00:14:15.838 "product_name": "Malloc disk", 00:14:15.838 "block_size": 512, 00:14:15.838 "num_blocks": 65536, 00:14:15.838 "uuid": "466ed7df-caa9-41d5-8966-8f82c5794704", 00:14:15.838 "assigned_rate_limits": { 00:14:15.838 "rw_ios_per_sec": 0, 00:14:15.838 "rw_mbytes_per_sec": 0, 00:14:15.838 "r_mbytes_per_sec": 0, 00:14:15.838 "w_mbytes_per_sec": 0 00:14:15.838 }, 00:14:15.838 "claimed": true, 00:14:15.838 "claim_type": "exclusive_write", 00:14:15.838 "zoned": false, 00:14:15.838 "supported_io_types": { 00:14:15.838 "read": true, 00:14:15.838 
"write": true, 00:14:15.838 "unmap": true, 00:14:15.838 "flush": true, 00:14:15.838 "reset": true, 00:14:15.838 "nvme_admin": false, 00:14:15.838 "nvme_io": false, 00:14:15.838 "nvme_io_md": false, 00:14:15.838 "write_zeroes": true, 00:14:15.838 "zcopy": true, 00:14:15.838 "get_zone_info": false, 00:14:15.838 "zone_management": false, 00:14:15.838 "zone_append": false, 00:14:15.838 "compare": false, 00:14:15.838 "compare_and_write": false, 00:14:15.838 "abort": true, 00:14:15.838 "seek_hole": false, 00:14:15.838 "seek_data": false, 00:14:15.838 "copy": true, 00:14:15.838 "nvme_iov_md": false 00:14:15.838 }, 00:14:15.838 "memory_domains": [ 00:14:15.838 { 00:14:15.838 "dma_device_id": "system", 00:14:15.838 "dma_device_type": 1 00:14:15.838 }, 00:14:15.838 { 00:14:15.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.838 "dma_device_type": 2 00:14:15.838 } 00:14:15.838 ], 00:14:15.838 "driver_specific": {} 00:14:15.838 } 00:14:15.838 ] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.838 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.838 "name": "Existed_Raid", 00:14:15.838 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:15.838 "strip_size_kb": 64, 00:14:15.838 "state": "configuring", 00:14:15.838 "raid_level": "raid0", 00:14:15.838 "superblock": true, 00:14:15.838 "num_base_bdevs": 4, 00:14:15.838 "num_base_bdevs_discovered": 3, 00:14:15.838 "num_base_bdevs_operational": 4, 00:14:15.838 "base_bdevs_list": [ 00:14:15.838 { 00:14:15.838 "name": "BaseBdev1", 00:14:15.838 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:15.838 "is_configured": true, 00:14:15.838 "data_offset": 2048, 00:14:15.838 "data_size": 63488 00:14:15.838 }, 00:14:15.838 { 00:14:15.838 "name": "BaseBdev2", 00:14:15.838 "uuid": 
"8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:15.838 "is_configured": true, 00:14:15.838 "data_offset": 2048, 00:14:15.838 "data_size": 63488 00:14:15.838 }, 00:14:15.838 { 00:14:15.839 "name": "BaseBdev3", 00:14:15.839 "uuid": "466ed7df-caa9-41d5-8966-8f82c5794704", 00:14:15.839 "is_configured": true, 00:14:15.839 "data_offset": 2048, 00:14:15.839 "data_size": 63488 00:14:15.839 }, 00:14:15.839 { 00:14:15.839 "name": "BaseBdev4", 00:14:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.839 "is_configured": false, 00:14:15.839 "data_offset": 0, 00:14:15.839 "data_size": 0 00:14:15.839 } 00:14:15.839 ] 00:14:15.839 }' 00:14:15.839 11:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.839 11:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 [2024-11-20 11:26:24.105730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.405 [2024-11-20 11:26:24.106305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:16.405 [2024-11-20 11:26:24.106332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:16.405 BaseBdev4 00:14:16.405 [2024-11-20 11:26:24.106699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.405 [2024-11-20 11:26:24.106900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:16.405 [2024-11-20 11:26:24.106922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:16.405 [2024-11-20 11:26:24.107097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 [ 00:14:16.405 { 00:14:16.405 "name": "BaseBdev4", 00:14:16.405 "aliases": [ 00:14:16.405 "78e159a8-3415-4104-94cb-f557be17554f" 00:14:16.405 ], 00:14:16.405 "product_name": "Malloc disk", 00:14:16.405 "block_size": 512, 00:14:16.405 
"num_blocks": 65536, 00:14:16.405 "uuid": "78e159a8-3415-4104-94cb-f557be17554f", 00:14:16.405 "assigned_rate_limits": { 00:14:16.405 "rw_ios_per_sec": 0, 00:14:16.405 "rw_mbytes_per_sec": 0, 00:14:16.405 "r_mbytes_per_sec": 0, 00:14:16.405 "w_mbytes_per_sec": 0 00:14:16.405 }, 00:14:16.405 "claimed": true, 00:14:16.405 "claim_type": "exclusive_write", 00:14:16.405 "zoned": false, 00:14:16.405 "supported_io_types": { 00:14:16.405 "read": true, 00:14:16.405 "write": true, 00:14:16.405 "unmap": true, 00:14:16.405 "flush": true, 00:14:16.405 "reset": true, 00:14:16.405 "nvme_admin": false, 00:14:16.405 "nvme_io": false, 00:14:16.405 "nvme_io_md": false, 00:14:16.405 "write_zeroes": true, 00:14:16.405 "zcopy": true, 00:14:16.405 "get_zone_info": false, 00:14:16.405 "zone_management": false, 00:14:16.405 "zone_append": false, 00:14:16.405 "compare": false, 00:14:16.405 "compare_and_write": false, 00:14:16.405 "abort": true, 00:14:16.405 "seek_hole": false, 00:14:16.405 "seek_data": false, 00:14:16.405 "copy": true, 00:14:16.405 "nvme_iov_md": false 00:14:16.405 }, 00:14:16.405 "memory_domains": [ 00:14:16.405 { 00:14:16.405 "dma_device_id": "system", 00:14:16.405 "dma_device_type": 1 00:14:16.405 }, 00:14:16.405 { 00:14:16.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.405 "dma_device_type": 2 00:14:16.405 } 00:14:16.405 ], 00:14:16.405 "driver_specific": {} 00:14:16.405 } 00:14:16.405 ] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.405 "name": "Existed_Raid", 00:14:16.405 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:16.405 "strip_size_kb": 64, 00:14:16.405 "state": "online", 00:14:16.405 "raid_level": "raid0", 00:14:16.405 "superblock": true, 00:14:16.405 "num_base_bdevs": 4, 
00:14:16.405 "num_base_bdevs_discovered": 4, 00:14:16.405 "num_base_bdevs_operational": 4, 00:14:16.405 "base_bdevs_list": [ 00:14:16.405 { 00:14:16.405 "name": "BaseBdev1", 00:14:16.405 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:16.405 "is_configured": true, 00:14:16.405 "data_offset": 2048, 00:14:16.405 "data_size": 63488 00:14:16.405 }, 00:14:16.405 { 00:14:16.405 "name": "BaseBdev2", 00:14:16.405 "uuid": "8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:16.405 "is_configured": true, 00:14:16.405 "data_offset": 2048, 00:14:16.405 "data_size": 63488 00:14:16.405 }, 00:14:16.405 { 00:14:16.405 "name": "BaseBdev3", 00:14:16.405 "uuid": "466ed7df-caa9-41d5-8966-8f82c5794704", 00:14:16.405 "is_configured": true, 00:14:16.405 "data_offset": 2048, 00:14:16.405 "data_size": 63488 00:14:16.405 }, 00:14:16.405 { 00:14:16.405 "name": "BaseBdev4", 00:14:16.405 "uuid": "78e159a8-3415-4104-94cb-f557be17554f", 00:14:16.405 "is_configured": true, 00:14:16.405 "data_offset": 2048, 00:14:16.405 "data_size": 63488 00:14:16.405 } 00:14:16.405 ] 00:14:16.405 }' 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.405 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.970 
11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.970 [2024-11-20 11:26:24.682385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.970 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.970 "name": "Existed_Raid", 00:14:16.970 "aliases": [ 00:14:16.970 "ec029c99-b352-4f71-8b0e-ec8e497bcd81" 00:14:16.970 ], 00:14:16.970 "product_name": "Raid Volume", 00:14:16.970 "block_size": 512, 00:14:16.970 "num_blocks": 253952, 00:14:16.970 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:16.970 "assigned_rate_limits": { 00:14:16.970 "rw_ios_per_sec": 0, 00:14:16.970 "rw_mbytes_per_sec": 0, 00:14:16.970 "r_mbytes_per_sec": 0, 00:14:16.970 "w_mbytes_per_sec": 0 00:14:16.970 }, 00:14:16.970 "claimed": false, 00:14:16.970 "zoned": false, 00:14:16.970 "supported_io_types": { 00:14:16.970 "read": true, 00:14:16.970 "write": true, 00:14:16.970 "unmap": true, 00:14:16.970 "flush": true, 00:14:16.970 "reset": true, 00:14:16.970 "nvme_admin": false, 00:14:16.970 "nvme_io": false, 00:14:16.970 "nvme_io_md": false, 00:14:16.970 "write_zeroes": true, 00:14:16.970 "zcopy": false, 00:14:16.970 "get_zone_info": false, 00:14:16.970 "zone_management": false, 00:14:16.970 "zone_append": false, 00:14:16.970 "compare": false, 00:14:16.970 "compare_and_write": false, 00:14:16.970 "abort": false, 00:14:16.970 "seek_hole": false, 00:14:16.970 "seek_data": false, 00:14:16.970 "copy": false, 00:14:16.970 
"nvme_iov_md": false 00:14:16.970 }, 00:14:16.970 "memory_domains": [ 00:14:16.970 { 00:14:16.970 "dma_device_id": "system", 00:14:16.970 "dma_device_type": 1 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.970 "dma_device_type": 2 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "system", 00:14:16.970 "dma_device_type": 1 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.970 "dma_device_type": 2 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "system", 00:14:16.970 "dma_device_type": 1 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.970 "dma_device_type": 2 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "system", 00:14:16.970 "dma_device_type": 1 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.970 "dma_device_type": 2 00:14:16.970 } 00:14:16.970 ], 00:14:16.970 "driver_specific": { 00:14:16.970 "raid": { 00:14:16.970 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:16.970 "strip_size_kb": 64, 00:14:16.970 "state": "online", 00:14:16.970 "raid_level": "raid0", 00:14:16.970 "superblock": true, 00:14:16.970 "num_base_bdevs": 4, 00:14:16.970 "num_base_bdevs_discovered": 4, 00:14:16.970 "num_base_bdevs_operational": 4, 00:14:16.970 "base_bdevs_list": [ 00:14:16.970 { 00:14:16.970 "name": "BaseBdev1", 00:14:16.970 "uuid": "ad45c52c-8b81-4e46-b593-77762a9bd8cd", 00:14:16.970 "is_configured": true, 00:14:16.970 "data_offset": 2048, 00:14:16.970 "data_size": 63488 00:14:16.970 }, 00:14:16.970 { 00:14:16.970 "name": "BaseBdev2", 00:14:16.970 "uuid": "8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:16.970 "is_configured": true, 00:14:16.970 "data_offset": 2048, 00:14:16.971 "data_size": 63488 00:14:16.971 }, 00:14:16.971 { 00:14:16.971 "name": "BaseBdev3", 00:14:16.971 "uuid": "466ed7df-caa9-41d5-8966-8f82c5794704", 00:14:16.971 "is_configured": true, 
00:14:16.971 "data_offset": 2048, 00:14:16.971 "data_size": 63488 00:14:16.971 }, 00:14:16.971 { 00:14:16.971 "name": "BaseBdev4", 00:14:16.971 "uuid": "78e159a8-3415-4104-94cb-f557be17554f", 00:14:16.971 "is_configured": true, 00:14:16.971 "data_offset": 2048, 00:14:16.971 "data_size": 63488 00:14:16.971 } 00:14:16.971 ] 00:14:16.971 } 00:14:16.971 } 00:14:16.971 }' 00:14:16.971 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.971 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:16.971 BaseBdev2 00:14:16.971 BaseBdev3 00:14:16.971 BaseBdev4' 00:14:16.971 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.230 11:26:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.230 11:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.230 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.230 [2024-11-20 11:26:25.054148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.230 [2024-11-20 11:26:25.054374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.230 [2024-11-20 11:26:25.054465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.489 "name": "Existed_Raid", 00:14:17.489 "uuid": "ec029c99-b352-4f71-8b0e-ec8e497bcd81", 00:14:17.489 "strip_size_kb": 64, 00:14:17.489 "state": "offline", 00:14:17.489 "raid_level": "raid0", 00:14:17.489 "superblock": true, 00:14:17.489 "num_base_bdevs": 4, 00:14:17.489 "num_base_bdevs_discovered": 3, 00:14:17.489 "num_base_bdevs_operational": 3, 00:14:17.489 "base_bdevs_list": [ 00:14:17.489 { 00:14:17.489 "name": null, 00:14:17.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.489 "is_configured": false, 00:14:17.489 "data_offset": 0, 00:14:17.489 "data_size": 63488 00:14:17.489 }, 00:14:17.489 { 00:14:17.489 "name": "BaseBdev2", 00:14:17.489 "uuid": "8e899138-20a2-40c1-8061-3670033ca6fa", 00:14:17.489 "is_configured": true, 00:14:17.489 "data_offset": 2048, 00:14:17.489 "data_size": 63488 00:14:17.489 }, 00:14:17.489 { 00:14:17.489 "name": "BaseBdev3", 00:14:17.489 "uuid": "466ed7df-caa9-41d5-8966-8f82c5794704", 00:14:17.489 "is_configured": true, 00:14:17.489 "data_offset": 2048, 00:14:17.489 "data_size": 63488 00:14:17.489 }, 00:14:17.489 { 00:14:17.489 "name": "BaseBdev4", 00:14:17.489 "uuid": "78e159a8-3415-4104-94cb-f557be17554f", 00:14:17.489 "is_configured": true, 00:14:17.489 "data_offset": 2048, 00:14:17.489 "data_size": 63488 00:14:17.489 } 00:14:17.489 ] 00:14:17.489 }' 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.489 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.056 
11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.056 [2024-11-20 11:26:25.721900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.056 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.056 [2024-11-20 11:26:25.890609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.314 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.315 11:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:18.315 11:26:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 [2024-11-20 11:26:26.052303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:18.315 [2024-11-20 11:26:26.052365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.315 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.573 BaseBdev2 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.573 [ 00:14:18.573 { 00:14:18.573 "name": "BaseBdev2", 00:14:18.573 "aliases": [ 00:14:18.573 
"4ab6a04d-f2e1-4c11-be92-b3922316efda" 00:14:18.573 ], 00:14:18.573 "product_name": "Malloc disk", 00:14:18.573 "block_size": 512, 00:14:18.573 "num_blocks": 65536, 00:14:18.573 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:18.573 "assigned_rate_limits": { 00:14:18.573 "rw_ios_per_sec": 0, 00:14:18.573 "rw_mbytes_per_sec": 0, 00:14:18.573 "r_mbytes_per_sec": 0, 00:14:18.573 "w_mbytes_per_sec": 0 00:14:18.573 }, 00:14:18.573 "claimed": false, 00:14:18.573 "zoned": false, 00:14:18.573 "supported_io_types": { 00:14:18.573 "read": true, 00:14:18.573 "write": true, 00:14:18.573 "unmap": true, 00:14:18.573 "flush": true, 00:14:18.573 "reset": true, 00:14:18.573 "nvme_admin": false, 00:14:18.573 "nvme_io": false, 00:14:18.573 "nvme_io_md": false, 00:14:18.573 "write_zeroes": true, 00:14:18.573 "zcopy": true, 00:14:18.573 "get_zone_info": false, 00:14:18.573 "zone_management": false, 00:14:18.573 "zone_append": false, 00:14:18.573 "compare": false, 00:14:18.573 "compare_and_write": false, 00:14:18.573 "abort": true, 00:14:18.573 "seek_hole": false, 00:14:18.573 "seek_data": false, 00:14:18.573 "copy": true, 00:14:18.573 "nvme_iov_md": false 00:14:18.573 }, 00:14:18.573 "memory_domains": [ 00:14:18.573 { 00:14:18.573 "dma_device_id": "system", 00:14:18.573 "dma_device_type": 1 00:14:18.573 }, 00:14:18.573 { 00:14:18.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.573 "dma_device_type": 2 00:14:18.573 } 00:14:18.573 ], 00:14:18.573 "driver_specific": {} 00:14:18.573 } 00:14:18.573 ] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.573 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.574 11:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.574 BaseBdev3 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.574 [ 00:14:18.574 { 
00:14:18.574 "name": "BaseBdev3", 00:14:18.574 "aliases": [ 00:14:18.574 "b13332fa-ad6d-4703-a17b-99059627287f" 00:14:18.574 ], 00:14:18.574 "product_name": "Malloc disk", 00:14:18.574 "block_size": 512, 00:14:18.574 "num_blocks": 65536, 00:14:18.574 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:18.574 "assigned_rate_limits": { 00:14:18.574 "rw_ios_per_sec": 0, 00:14:18.574 "rw_mbytes_per_sec": 0, 00:14:18.574 "r_mbytes_per_sec": 0, 00:14:18.574 "w_mbytes_per_sec": 0 00:14:18.574 }, 00:14:18.574 "claimed": false, 00:14:18.574 "zoned": false, 00:14:18.574 "supported_io_types": { 00:14:18.574 "read": true, 00:14:18.574 "write": true, 00:14:18.574 "unmap": true, 00:14:18.574 "flush": true, 00:14:18.574 "reset": true, 00:14:18.574 "nvme_admin": false, 00:14:18.574 "nvme_io": false, 00:14:18.574 "nvme_io_md": false, 00:14:18.574 "write_zeroes": true, 00:14:18.574 "zcopy": true, 00:14:18.574 "get_zone_info": false, 00:14:18.574 "zone_management": false, 00:14:18.574 "zone_append": false, 00:14:18.574 "compare": false, 00:14:18.574 "compare_and_write": false, 00:14:18.574 "abort": true, 00:14:18.574 "seek_hole": false, 00:14:18.574 "seek_data": false, 00:14:18.574 "copy": true, 00:14:18.574 "nvme_iov_md": false 00:14:18.574 }, 00:14:18.574 "memory_domains": [ 00:14:18.574 { 00:14:18.574 "dma_device_id": "system", 00:14:18.574 "dma_device_type": 1 00:14:18.574 }, 00:14:18.574 { 00:14:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.574 "dma_device_type": 2 00:14:18.574 } 00:14:18.574 ], 00:14:18.574 "driver_specific": {} 00:14:18.574 } 00:14:18.574 ] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.574 BaseBdev4 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:18.574 [ 00:14:18.574 { 00:14:18.574 "name": "BaseBdev4", 00:14:18.574 "aliases": [ 00:14:18.574 "fbd12e84-6fbd-4801-870a-0d8b45d31578" 00:14:18.574 ], 00:14:18.574 "product_name": "Malloc disk", 00:14:18.574 "block_size": 512, 00:14:18.574 "num_blocks": 65536, 00:14:18.574 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:18.574 "assigned_rate_limits": { 00:14:18.574 "rw_ios_per_sec": 0, 00:14:18.574 "rw_mbytes_per_sec": 0, 00:14:18.574 "r_mbytes_per_sec": 0, 00:14:18.574 "w_mbytes_per_sec": 0 00:14:18.574 }, 00:14:18.574 "claimed": false, 00:14:18.574 "zoned": false, 00:14:18.574 "supported_io_types": { 00:14:18.574 "read": true, 00:14:18.574 "write": true, 00:14:18.574 "unmap": true, 00:14:18.574 "flush": true, 00:14:18.574 "reset": true, 00:14:18.574 "nvme_admin": false, 00:14:18.574 "nvme_io": false, 00:14:18.574 "nvme_io_md": false, 00:14:18.574 "write_zeroes": true, 00:14:18.574 "zcopy": true, 00:14:18.574 "get_zone_info": false, 00:14:18.574 "zone_management": false, 00:14:18.574 "zone_append": false, 00:14:18.574 "compare": false, 00:14:18.574 "compare_and_write": false, 00:14:18.574 "abort": true, 00:14:18.574 "seek_hole": false, 00:14:18.574 "seek_data": false, 00:14:18.574 "copy": true, 00:14:18.574 "nvme_iov_md": false 00:14:18.574 }, 00:14:18.574 "memory_domains": [ 00:14:18.574 { 00:14:18.574 "dma_device_id": "system", 00:14:18.574 "dma_device_type": 1 00:14:18.574 }, 00:14:18.574 { 00:14:18.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.574 "dma_device_type": 2 00:14:18.574 } 00:14:18.574 ], 00:14:18.574 "driver_specific": {} 00:14:18.574 } 00:14:18.574 ] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.574 11:26:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.574 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.833 [2024-11-20 11:26:26.417215] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.833 [2024-11-20 11:26:26.417402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.833 [2024-11-20 11:26:26.417537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.833 [2024-11-20 11:26:26.420002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.833 [2024-11-20 11:26:26.420074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.833 "name": "Existed_Raid", 00:14:18.833 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:18.833 "strip_size_kb": 64, 00:14:18.833 "state": "configuring", 00:14:18.833 "raid_level": "raid0", 00:14:18.833 "superblock": true, 00:14:18.833 "num_base_bdevs": 4, 00:14:18.833 "num_base_bdevs_discovered": 3, 00:14:18.833 "num_base_bdevs_operational": 4, 00:14:18.833 "base_bdevs_list": [ 00:14:18.833 { 00:14:18.833 "name": "BaseBdev1", 00:14:18.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.833 "is_configured": false, 00:14:18.833 "data_offset": 0, 00:14:18.833 "data_size": 0 00:14:18.833 }, 00:14:18.833 { 00:14:18.833 "name": "BaseBdev2", 00:14:18.833 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:18.833 "is_configured": true, 00:14:18.833 "data_offset": 2048, 00:14:18.833 "data_size": 63488 
00:14:18.833 }, 00:14:18.833 { 00:14:18.833 "name": "BaseBdev3", 00:14:18.833 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:18.833 "is_configured": true, 00:14:18.833 "data_offset": 2048, 00:14:18.833 "data_size": 63488 00:14:18.833 }, 00:14:18.833 { 00:14:18.833 "name": "BaseBdev4", 00:14:18.833 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:18.833 "is_configured": true, 00:14:18.833 "data_offset": 2048, 00:14:18.833 "data_size": 63488 00:14:18.833 } 00:14:18.833 ] 00:14:18.833 }' 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.833 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.158 [2024-11-20 11:26:26.921366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.158 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.418 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.418 "name": "Existed_Raid", 00:14:19.418 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:19.418 "strip_size_kb": 64, 00:14:19.418 "state": "configuring", 00:14:19.418 "raid_level": "raid0", 00:14:19.418 "superblock": true, 00:14:19.418 "num_base_bdevs": 4, 00:14:19.418 "num_base_bdevs_discovered": 2, 00:14:19.418 "num_base_bdevs_operational": 4, 00:14:19.418 "base_bdevs_list": [ 00:14:19.418 { 00:14:19.418 "name": "BaseBdev1", 00:14:19.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.418 "is_configured": false, 00:14:19.418 "data_offset": 0, 00:14:19.418 "data_size": 0 00:14:19.418 }, 00:14:19.418 { 00:14:19.418 "name": null, 00:14:19.418 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:19.418 "is_configured": false, 00:14:19.418 "data_offset": 0, 00:14:19.418 "data_size": 63488 
00:14:19.418 }, 00:14:19.418 { 00:14:19.418 "name": "BaseBdev3", 00:14:19.418 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:19.418 "is_configured": true, 00:14:19.418 "data_offset": 2048, 00:14:19.418 "data_size": 63488 00:14:19.418 }, 00:14:19.418 { 00:14:19.418 "name": "BaseBdev4", 00:14:19.418 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:19.418 "is_configured": true, 00:14:19.418 "data_offset": 2048, 00:14:19.418 "data_size": 63488 00:14:19.418 } 00:14:19.418 ] 00:14:19.418 }' 00:14:19.418 11:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.418 11:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.677 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.937 [2024-11-20 11:26:27.539444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.937 BaseBdev1 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.937 [ 00:14:19.937 { 00:14:19.937 "name": "BaseBdev1", 00:14:19.937 "aliases": [ 00:14:19.937 "f89d7818-cfc8-4c98-9f61-134e60ffc8ef" 00:14:19.937 ], 00:14:19.937 "product_name": "Malloc disk", 00:14:19.937 "block_size": 512, 00:14:19.937 "num_blocks": 65536, 00:14:19.937 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:19.937 "assigned_rate_limits": { 00:14:19.937 "rw_ios_per_sec": 0, 00:14:19.937 "rw_mbytes_per_sec": 0, 
00:14:19.937 "r_mbytes_per_sec": 0, 00:14:19.937 "w_mbytes_per_sec": 0 00:14:19.937 }, 00:14:19.937 "claimed": true, 00:14:19.937 "claim_type": "exclusive_write", 00:14:19.937 "zoned": false, 00:14:19.937 "supported_io_types": { 00:14:19.937 "read": true, 00:14:19.937 "write": true, 00:14:19.937 "unmap": true, 00:14:19.937 "flush": true, 00:14:19.937 "reset": true, 00:14:19.937 "nvme_admin": false, 00:14:19.937 "nvme_io": false, 00:14:19.937 "nvme_io_md": false, 00:14:19.937 "write_zeroes": true, 00:14:19.937 "zcopy": true, 00:14:19.937 "get_zone_info": false, 00:14:19.937 "zone_management": false, 00:14:19.937 "zone_append": false, 00:14:19.937 "compare": false, 00:14:19.937 "compare_and_write": false, 00:14:19.937 "abort": true, 00:14:19.937 "seek_hole": false, 00:14:19.937 "seek_data": false, 00:14:19.937 "copy": true, 00:14:19.937 "nvme_iov_md": false 00:14:19.937 }, 00:14:19.937 "memory_domains": [ 00:14:19.937 { 00:14:19.937 "dma_device_id": "system", 00:14:19.937 "dma_device_type": 1 00:14:19.937 }, 00:14:19.937 { 00:14:19.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.937 "dma_device_type": 2 00:14:19.937 } 00:14:19.937 ], 00:14:19.937 "driver_specific": {} 00:14:19.937 } 00:14:19.937 ] 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.937 11:26:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.937 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.938 "name": "Existed_Raid", 00:14:19.938 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:19.938 "strip_size_kb": 64, 00:14:19.938 "state": "configuring", 00:14:19.938 "raid_level": "raid0", 00:14:19.938 "superblock": true, 00:14:19.938 "num_base_bdevs": 4, 00:14:19.938 "num_base_bdevs_discovered": 3, 00:14:19.938 "num_base_bdevs_operational": 4, 00:14:19.938 "base_bdevs_list": [ 00:14:19.938 { 00:14:19.938 "name": "BaseBdev1", 00:14:19.938 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:19.938 "is_configured": true, 00:14:19.938 "data_offset": 2048, 00:14:19.938 "data_size": 63488 00:14:19.938 }, 00:14:19.938 { 
00:14:19.938 "name": null, 00:14:19.938 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:19.938 "is_configured": false, 00:14:19.938 "data_offset": 0, 00:14:19.938 "data_size": 63488 00:14:19.938 }, 00:14:19.938 { 00:14:19.938 "name": "BaseBdev3", 00:14:19.938 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:19.938 "is_configured": true, 00:14:19.938 "data_offset": 2048, 00:14:19.938 "data_size": 63488 00:14:19.938 }, 00:14:19.938 { 00:14:19.938 "name": "BaseBdev4", 00:14:19.938 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:19.938 "is_configured": true, 00:14:19.938 "data_offset": 2048, 00:14:19.938 "data_size": 63488 00:14:19.938 } 00:14:19.938 ] 00:14:19.938 }' 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.938 11:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.505 [2024-11-20 11:26:28.127738] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.505 11:26:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.505 "name": "Existed_Raid", 00:14:20.505 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:20.505 "strip_size_kb": 64, 00:14:20.505 "state": "configuring", 00:14:20.505 "raid_level": "raid0", 00:14:20.505 "superblock": true, 00:14:20.505 "num_base_bdevs": 4, 00:14:20.505 "num_base_bdevs_discovered": 2, 00:14:20.505 "num_base_bdevs_operational": 4, 00:14:20.505 "base_bdevs_list": [ 00:14:20.505 { 00:14:20.505 "name": "BaseBdev1", 00:14:20.505 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:20.505 "is_configured": true, 00:14:20.505 "data_offset": 2048, 00:14:20.505 "data_size": 63488 00:14:20.505 }, 00:14:20.505 { 00:14:20.505 "name": null, 00:14:20.505 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:20.505 "is_configured": false, 00:14:20.505 "data_offset": 0, 00:14:20.505 "data_size": 63488 00:14:20.505 }, 00:14:20.505 { 00:14:20.505 "name": null, 00:14:20.505 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:20.505 "is_configured": false, 00:14:20.505 "data_offset": 0, 00:14:20.505 "data_size": 63488 00:14:20.505 }, 00:14:20.505 { 00:14:20.505 "name": "BaseBdev4", 00:14:20.505 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:20.505 "is_configured": true, 00:14:20.505 "data_offset": 2048, 00:14:20.505 "data_size": 63488 00:14:20.505 } 00:14:20.505 ] 00:14:20.505 }' 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.505 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.073 11:26:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.073 [2024-11-20 11:26:28.703914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.073 "name": "Existed_Raid", 00:14:21.073 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:21.073 "strip_size_kb": 64, 00:14:21.073 "state": "configuring", 00:14:21.073 "raid_level": "raid0", 00:14:21.073 "superblock": true, 00:14:21.073 "num_base_bdevs": 4, 00:14:21.073 "num_base_bdevs_discovered": 3, 00:14:21.073 "num_base_bdevs_operational": 4, 00:14:21.073 "base_bdevs_list": [ 00:14:21.073 { 00:14:21.073 "name": "BaseBdev1", 00:14:21.073 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:21.073 "is_configured": true, 00:14:21.073 "data_offset": 2048, 00:14:21.073 "data_size": 63488 00:14:21.073 }, 00:14:21.073 { 00:14:21.073 "name": null, 00:14:21.073 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:21.073 "is_configured": false, 00:14:21.073 "data_offset": 0, 00:14:21.073 "data_size": 63488 00:14:21.073 }, 00:14:21.073 { 00:14:21.073 "name": "BaseBdev3", 00:14:21.073 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:21.073 "is_configured": true, 00:14:21.073 "data_offset": 2048, 00:14:21.073 "data_size": 63488 00:14:21.073 }, 00:14:21.073 { 00:14:21.073 "name": "BaseBdev4", 00:14:21.073 "uuid": 
"fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:21.073 "is_configured": true, 00:14:21.073 "data_offset": 2048, 00:14:21.073 "data_size": 63488 00:14:21.073 } 00:14:21.073 ] 00:14:21.073 }' 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.073 11:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 [2024-11-20 11:26:29.256513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.640 "name": "Existed_Raid", 00:14:21.640 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:21.640 "strip_size_kb": 64, 00:14:21.640 "state": "configuring", 00:14:21.640 "raid_level": "raid0", 00:14:21.640 "superblock": true, 00:14:21.640 "num_base_bdevs": 4, 00:14:21.640 "num_base_bdevs_discovered": 2, 00:14:21.640 "num_base_bdevs_operational": 4, 00:14:21.640 "base_bdevs_list": [ 00:14:21.640 { 00:14:21.640 "name": null, 00:14:21.640 
"uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:21.640 "is_configured": false, 00:14:21.640 "data_offset": 0, 00:14:21.640 "data_size": 63488 00:14:21.640 }, 00:14:21.640 { 00:14:21.640 "name": null, 00:14:21.640 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:21.640 "is_configured": false, 00:14:21.640 "data_offset": 0, 00:14:21.640 "data_size": 63488 00:14:21.640 }, 00:14:21.640 { 00:14:21.640 "name": "BaseBdev3", 00:14:21.640 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:21.640 "is_configured": true, 00:14:21.640 "data_offset": 2048, 00:14:21.640 "data_size": 63488 00:14:21.640 }, 00:14:21.640 { 00:14:21.640 "name": "BaseBdev4", 00:14:21.640 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:21.640 "is_configured": true, 00:14:21.640 "data_offset": 2048, 00:14:21.640 "data_size": 63488 00:14:21.640 } 00:14:21.640 ] 00:14:21.640 }' 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.640 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.208 [2024-11-20 11:26:29.913550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.208 11:26:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.208 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.208 "name": "Existed_Raid", 00:14:22.208 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:22.208 "strip_size_kb": 64, 00:14:22.208 "state": "configuring", 00:14:22.208 "raid_level": "raid0", 00:14:22.209 "superblock": true, 00:14:22.209 "num_base_bdevs": 4, 00:14:22.209 "num_base_bdevs_discovered": 3, 00:14:22.209 "num_base_bdevs_operational": 4, 00:14:22.209 "base_bdevs_list": [ 00:14:22.209 { 00:14:22.209 "name": null, 00:14:22.209 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:22.209 "is_configured": false, 00:14:22.209 "data_offset": 0, 00:14:22.209 "data_size": 63488 00:14:22.209 }, 00:14:22.209 { 00:14:22.209 "name": "BaseBdev2", 00:14:22.209 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:22.209 "is_configured": true, 00:14:22.209 "data_offset": 2048, 00:14:22.209 "data_size": 63488 00:14:22.209 }, 00:14:22.209 { 00:14:22.209 "name": "BaseBdev3", 00:14:22.209 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:22.209 "is_configured": true, 00:14:22.209 "data_offset": 2048, 00:14:22.209 "data_size": 63488 00:14:22.209 }, 00:14:22.209 { 00:14:22.209 "name": "BaseBdev4", 00:14:22.209 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:22.209 "is_configured": true, 00:14:22.209 "data_offset": 2048, 00:14:22.209 "data_size": 63488 00:14:22.209 } 00:14:22.209 ] 00:14:22.209 }' 00:14:22.209 11:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.209 11:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.857 11:26:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f89d7818-cfc8-4c98-9f61-134e60ffc8ef 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 [2024-11-20 11:26:30.572164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.857 [2024-11-20 11:26:30.572460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.857 [2024-11-20 11:26:30.572479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:22.857 NewBaseBdev 00:14:22.857 [2024-11-20 11:26:30.572838] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:22.857 [2024-11-20 11:26:30.573026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.857 [2024-11-20 11:26:30.573048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:22.857 [2024-11-20 11:26:30.573208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.857 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.857 
11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.857 [ 00:14:22.857 { 00:14:22.857 "name": "NewBaseBdev", 00:14:22.857 "aliases": [ 00:14:22.857 "f89d7818-cfc8-4c98-9f61-134e60ffc8ef" 00:14:22.857 ], 00:14:22.857 "product_name": "Malloc disk", 00:14:22.857 "block_size": 512, 00:14:22.857 "num_blocks": 65536, 00:14:22.857 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:22.857 "assigned_rate_limits": { 00:14:22.857 "rw_ios_per_sec": 0, 00:14:22.857 "rw_mbytes_per_sec": 0, 00:14:22.857 "r_mbytes_per_sec": 0, 00:14:22.857 "w_mbytes_per_sec": 0 00:14:22.857 }, 00:14:22.857 "claimed": true, 00:14:22.857 "claim_type": "exclusive_write", 00:14:22.857 "zoned": false, 00:14:22.857 "supported_io_types": { 00:14:22.857 "read": true, 00:14:22.857 "write": true, 00:14:22.857 "unmap": true, 00:14:22.857 "flush": true, 00:14:22.857 "reset": true, 00:14:22.857 "nvme_admin": false, 00:14:22.857 "nvme_io": false, 00:14:22.857 "nvme_io_md": false, 00:14:22.858 "write_zeroes": true, 00:14:22.858 "zcopy": true, 00:14:22.858 "get_zone_info": false, 00:14:22.858 "zone_management": false, 00:14:22.858 "zone_append": false, 00:14:22.858 "compare": false, 00:14:22.858 "compare_and_write": false, 00:14:22.858 "abort": true, 00:14:22.858 "seek_hole": false, 00:14:22.858 "seek_data": false, 00:14:22.858 "copy": true, 00:14:22.858 "nvme_iov_md": false 00:14:22.858 }, 00:14:22.858 "memory_domains": [ 00:14:22.858 { 00:14:22.858 "dma_device_id": "system", 00:14:22.858 "dma_device_type": 1 00:14:22.858 }, 00:14:22.858 { 00:14:22.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.858 "dma_device_type": 2 00:14:22.858 } 00:14:22.858 ], 00:14:22.858 "driver_specific": {} 00:14:22.858 } 00:14:22.858 ] 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.858 11:26:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.858 "name": "Existed_Raid", 00:14:22.858 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:22.858 "strip_size_kb": 64, 00:14:22.858 
"state": "online", 00:14:22.858 "raid_level": "raid0", 00:14:22.858 "superblock": true, 00:14:22.858 "num_base_bdevs": 4, 00:14:22.858 "num_base_bdevs_discovered": 4, 00:14:22.858 "num_base_bdevs_operational": 4, 00:14:22.858 "base_bdevs_list": [ 00:14:22.858 { 00:14:22.858 "name": "NewBaseBdev", 00:14:22.858 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:22.858 "is_configured": true, 00:14:22.858 "data_offset": 2048, 00:14:22.858 "data_size": 63488 00:14:22.858 }, 00:14:22.858 { 00:14:22.858 "name": "BaseBdev2", 00:14:22.858 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:22.858 "is_configured": true, 00:14:22.858 "data_offset": 2048, 00:14:22.858 "data_size": 63488 00:14:22.858 }, 00:14:22.858 { 00:14:22.858 "name": "BaseBdev3", 00:14:22.858 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:22.858 "is_configured": true, 00:14:22.858 "data_offset": 2048, 00:14:22.858 "data_size": 63488 00:14:22.858 }, 00:14:22.858 { 00:14:22.858 "name": "BaseBdev4", 00:14:22.858 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:22.858 "is_configured": true, 00:14:22.858 "data_offset": 2048, 00:14:22.858 "data_size": 63488 00:14:22.858 } 00:14:22.858 ] 00:14:22.858 }' 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.858 11:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.426 
11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.426 [2024-11-20 11:26:31.105002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.426 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.426 "name": "Existed_Raid", 00:14:23.426 "aliases": [ 00:14:23.426 "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6" 00:14:23.426 ], 00:14:23.426 "product_name": "Raid Volume", 00:14:23.426 "block_size": 512, 00:14:23.426 "num_blocks": 253952, 00:14:23.426 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:23.426 "assigned_rate_limits": { 00:14:23.426 "rw_ios_per_sec": 0, 00:14:23.426 "rw_mbytes_per_sec": 0, 00:14:23.426 "r_mbytes_per_sec": 0, 00:14:23.426 "w_mbytes_per_sec": 0 00:14:23.426 }, 00:14:23.426 "claimed": false, 00:14:23.426 "zoned": false, 00:14:23.426 "supported_io_types": { 00:14:23.426 "read": true, 00:14:23.426 "write": true, 00:14:23.426 "unmap": true, 00:14:23.426 "flush": true, 00:14:23.426 "reset": true, 00:14:23.426 "nvme_admin": false, 00:14:23.426 "nvme_io": false, 00:14:23.426 "nvme_io_md": false, 00:14:23.426 "write_zeroes": true, 00:14:23.426 "zcopy": false, 00:14:23.426 "get_zone_info": false, 00:14:23.426 "zone_management": false, 00:14:23.426 "zone_append": false, 00:14:23.426 "compare": false, 00:14:23.426 "compare_and_write": false, 00:14:23.426 "abort": 
false, 00:14:23.426 "seek_hole": false, 00:14:23.426 "seek_data": false, 00:14:23.426 "copy": false, 00:14:23.426 "nvme_iov_md": false 00:14:23.426 }, 00:14:23.426 "memory_domains": [ 00:14:23.426 { 00:14:23.426 "dma_device_id": "system", 00:14:23.426 "dma_device_type": 1 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.426 "dma_device_type": 2 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "system", 00:14:23.426 "dma_device_type": 1 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.426 "dma_device_type": 2 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "system", 00:14:23.426 "dma_device_type": 1 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.426 "dma_device_type": 2 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "system", 00:14:23.426 "dma_device_type": 1 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.426 "dma_device_type": 2 00:14:23.426 } 00:14:23.426 ], 00:14:23.426 "driver_specific": { 00:14:23.426 "raid": { 00:14:23.426 "uuid": "2bf82c23-b515-4d9e-9941-3be7e6f3fcc6", 00:14:23.426 "strip_size_kb": 64, 00:14:23.426 "state": "online", 00:14:23.426 "raid_level": "raid0", 00:14:23.426 "superblock": true, 00:14:23.426 "num_base_bdevs": 4, 00:14:23.426 "num_base_bdevs_discovered": 4, 00:14:23.426 "num_base_bdevs_operational": 4, 00:14:23.426 "base_bdevs_list": [ 00:14:23.426 { 00:14:23.426 "name": "NewBaseBdev", 00:14:23.426 "uuid": "f89d7818-cfc8-4c98-9f61-134e60ffc8ef", 00:14:23.426 "is_configured": true, 00:14:23.426 "data_offset": 2048, 00:14:23.426 "data_size": 63488 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "name": "BaseBdev2", 00:14:23.426 "uuid": "4ab6a04d-f2e1-4c11-be92-b3922316efda", 00:14:23.426 "is_configured": true, 00:14:23.426 "data_offset": 2048, 00:14:23.426 "data_size": 63488 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 
"name": "BaseBdev3", 00:14:23.426 "uuid": "b13332fa-ad6d-4703-a17b-99059627287f", 00:14:23.426 "is_configured": true, 00:14:23.426 "data_offset": 2048, 00:14:23.426 "data_size": 63488 00:14:23.426 }, 00:14:23.426 { 00:14:23.426 "name": "BaseBdev4", 00:14:23.426 "uuid": "fbd12e84-6fbd-4801-870a-0d8b45d31578", 00:14:23.426 "is_configured": true, 00:14:23.427 "data_offset": 2048, 00:14:23.427 "data_size": 63488 00:14:23.427 } 00:14:23.427 ] 00:14:23.427 } 00:14:23.427 } 00:14:23.427 }' 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:23.427 BaseBdev2 00:14:23.427 BaseBdev3 00:14:23.427 BaseBdev4' 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.427 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.686 11:26:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.686 [2024-11-20 11:26:31.500603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.686 [2024-11-20 11:26:31.500658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.686 [2024-11-20 11:26:31.500770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.686 [2024-11-20 11:26:31.500862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.686 [2024-11-20 11:26:31.500879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70067 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70067 ']' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70067 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.686 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70067 00:14:23.944 killing process with pid 70067 00:14:23.945 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.945 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.945 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70067' 00:14:23.945 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70067 00:14:23.945 [2024-11-20 11:26:31.537013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.945 11:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70067 00:14:24.204 [2024-11-20 11:26:31.883758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.140 11:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.140 00:14:25.140 real 0m12.897s 00:14:25.140 user 0m21.452s 00:14:25.140 sys 0m1.752s 00:14:25.140 11:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.140 11:26:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.140 ************************************ 00:14:25.140 END TEST raid_state_function_test_sb 00:14:25.140 ************************************ 00:14:25.140 11:26:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:25.140 11:26:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:25.140 11:26:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.140 11:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.140 ************************************ 00:14:25.140 START TEST raid_superblock_test 00:14:25.140 ************************************ 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70750 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70750 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70750 ']' 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.140 11:26:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.398 [2024-11-20 11:26:33.087897] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:25.398 [2024-11-20 11:26:33.088289] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70750 ] 00:14:25.658 [2024-11-20 11:26:33.273035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.658 [2024-11-20 11:26:33.400910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.916 [2024-11-20 11:26:33.607875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.916 [2024-11-20 11:26:33.607915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:26.484 
11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.484 malloc1 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.484 [2024-11-20 11:26:34.072453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.484 [2024-11-20 11:26:34.072562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.484 [2024-11-20 11:26:34.072613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:26.484 [2024-11-20 11:26:34.072630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.484 [2024-11-20 11:26:34.075736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.484 [2024-11-20 11:26:34.075922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.484 pt1 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.484 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 malloc2 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-20 11:26:34.129227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.485 [2024-11-20 11:26:34.129436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.485 [2024-11-20 11:26:34.129515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.485 [2024-11-20 11:26:34.129778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.485 [2024-11-20 11:26:34.132605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.485 [2024-11-20 11:26:34.132790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.485 
pt2 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 malloc3 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-20 11:26:34.196906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.485 [2024-11-20 11:26:34.196970] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.485 [2024-11-20 11:26:34.197003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:26.485 [2024-11-20 11:26:34.197020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.485 [2024-11-20 11:26:34.199717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.485 [2024-11-20 11:26:34.199762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.485 pt3 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 malloc4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-20 11:26:34.249376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:26.485 [2024-11-20 11:26:34.249439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.485 [2024-11-20 11:26:34.249469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:26.485 [2024-11-20 11:26:34.249484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.485 [2024-11-20 11:26:34.252279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.485 [2024-11-20 11:26:34.252326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:26.485 pt4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 [2024-11-20 11:26:34.261418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.485 [2024-11-20 
11:26:34.263888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.485 [2024-11-20 11:26:34.264124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.485 [2024-11-20 11:26:34.264235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:26.485 [2024-11-20 11:26:34.264485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:26.485 [2024-11-20 11:26:34.264504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:26.485 [2024-11-20 11:26:34.264876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:26.485 [2024-11-20 11:26:34.265101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:26.485 [2024-11-20 11:26:34.265130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:26.485 [2024-11-20 11:26:34.265357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.485 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.486 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.486 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.486 "name": "raid_bdev1", 00:14:26.486 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:26.486 "strip_size_kb": 64, 00:14:26.486 "state": "online", 00:14:26.486 "raid_level": "raid0", 00:14:26.486 "superblock": true, 00:14:26.486 "num_base_bdevs": 4, 00:14:26.486 "num_base_bdevs_discovered": 4, 00:14:26.486 "num_base_bdevs_operational": 4, 00:14:26.486 "base_bdevs_list": [ 00:14:26.486 { 00:14:26.486 "name": "pt1", 00:14:26.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.486 "is_configured": true, 00:14:26.486 "data_offset": 2048, 00:14:26.486 "data_size": 63488 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "name": "pt2", 00:14:26.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.486 "is_configured": true, 00:14:26.486 "data_offset": 2048, 00:14:26.486 "data_size": 63488 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "name": "pt3", 00:14:26.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.486 "is_configured": true, 00:14:26.486 "data_offset": 2048, 00:14:26.486 
"data_size": 63488 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "name": "pt4", 00:14:26.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:26.486 "is_configured": true, 00:14:26.486 "data_offset": 2048, 00:14:26.486 "data_size": 63488 00:14:26.486 } 00:14:26.486 ] 00:14:26.486 }' 00:14:26.486 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.486 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.053 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.054 [2024-11-20 11:26:34.769994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.054 "name": "raid_bdev1", 00:14:27.054 "aliases": [ 00:14:27.054 "116d2314-42ed-430a-8898-063d12a2299f" 
00:14:27.054 ], 00:14:27.054 "product_name": "Raid Volume", 00:14:27.054 "block_size": 512, 00:14:27.054 "num_blocks": 253952, 00:14:27.054 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:27.054 "assigned_rate_limits": { 00:14:27.054 "rw_ios_per_sec": 0, 00:14:27.054 "rw_mbytes_per_sec": 0, 00:14:27.054 "r_mbytes_per_sec": 0, 00:14:27.054 "w_mbytes_per_sec": 0 00:14:27.054 }, 00:14:27.054 "claimed": false, 00:14:27.054 "zoned": false, 00:14:27.054 "supported_io_types": { 00:14:27.054 "read": true, 00:14:27.054 "write": true, 00:14:27.054 "unmap": true, 00:14:27.054 "flush": true, 00:14:27.054 "reset": true, 00:14:27.054 "nvme_admin": false, 00:14:27.054 "nvme_io": false, 00:14:27.054 "nvme_io_md": false, 00:14:27.054 "write_zeroes": true, 00:14:27.054 "zcopy": false, 00:14:27.054 "get_zone_info": false, 00:14:27.054 "zone_management": false, 00:14:27.054 "zone_append": false, 00:14:27.054 "compare": false, 00:14:27.054 "compare_and_write": false, 00:14:27.054 "abort": false, 00:14:27.054 "seek_hole": false, 00:14:27.054 "seek_data": false, 00:14:27.054 "copy": false, 00:14:27.054 "nvme_iov_md": false 00:14:27.054 }, 00:14:27.054 "memory_domains": [ 00:14:27.054 { 00:14:27.054 "dma_device_id": "system", 00:14:27.054 "dma_device_type": 1 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.054 "dma_device_type": 2 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "system", 00:14:27.054 "dma_device_type": 1 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.054 "dma_device_type": 2 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "system", 00:14:27.054 "dma_device_type": 1 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.054 "dma_device_type": 2 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": "system", 00:14:27.054 "dma_device_type": 1 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:27.054 "dma_device_type": 2 00:14:27.054 } 00:14:27.054 ], 00:14:27.054 "driver_specific": { 00:14:27.054 "raid": { 00:14:27.054 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:27.054 "strip_size_kb": 64, 00:14:27.054 "state": "online", 00:14:27.054 "raid_level": "raid0", 00:14:27.054 "superblock": true, 00:14:27.054 "num_base_bdevs": 4, 00:14:27.054 "num_base_bdevs_discovered": 4, 00:14:27.054 "num_base_bdevs_operational": 4, 00:14:27.054 "base_bdevs_list": [ 00:14:27.054 { 00:14:27.054 "name": "pt1", 00:14:27.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.054 "is_configured": true, 00:14:27.054 "data_offset": 2048, 00:14:27.054 "data_size": 63488 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "name": "pt2", 00:14:27.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.054 "is_configured": true, 00:14:27.054 "data_offset": 2048, 00:14:27.054 "data_size": 63488 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "name": "pt3", 00:14:27.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.054 "is_configured": true, 00:14:27.054 "data_offset": 2048, 00:14:27.054 "data_size": 63488 00:14:27.054 }, 00:14:27.054 { 00:14:27.054 "name": "pt4", 00:14:27.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.054 "is_configured": true, 00:14:27.054 "data_offset": 2048, 00:14:27.054 "data_size": 63488 00:14:27.054 } 00:14:27.054 ] 00:14:27.054 } 00:14:27.054 } 00:14:27.054 }' 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.054 pt2 00:14:27.054 pt3 00:14:27.054 pt4' 00:14:27.054 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 11:26:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.314 11:26:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
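The four repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above are `verify_raid_bdev_properties` comparing the raid volume's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple (extracted with `jq -r '... | join(" ")'`) against the same tuple from each base bdev. With no metadata configured, the three trailing fields are null, so both sides flatten to the string `512` followed by three blanks. A minimal sketch of that comparison loop, with the per-bdev tuple hard-coded in place of the `bdev_get_bdevs | jq` pipeline it actually comes from:

```shell
# Sketch of the tuple comparison behind the repeated [[ 512 == 512 ]]
# checks above. join(" ") over [block_size, md_size, md_interleave,
# dif_type] yields "512   " (three trailing blanks for the null fields).
cmp_raid_bdev='512   '
matched=0
for name in pt1 pt2 pt3 pt4; do
    # In the real test this comes from: rpc_cmd bdev_get_bdevs -b "$name" | jq
    cmp_base_bdev='512   '
    if [ "$cmp_base_bdev" = "$cmp_raid_bdev" ]; then
        matched=$((matched + 1))
    fi
done
echo "$matched base bdevs match the raid bdev tuple"
```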
00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:27.314 [2024-11-20 11:26:35.130060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.314 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=116d2314-42ed-430a-8898-063d12a2299f 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 116d2314-42ed-430a-8898-063d12a2299f ']' 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 [2024-11-20 11:26:35.177689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.573 [2024-11-20 11:26:35.177825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.573 [2024-11-20 11:26:35.177937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.573 [2024-11-20 11:26:35.178036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.573 [2024-11-20 11:26:35.178060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.574 [2024-11-20 11:26:35.333774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:27.574 [2024-11-20 11:26:35.336452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:27.574 [2024-11-20 11:26:35.336538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:27.574 [2024-11-20 11:26:35.336599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:27.574 [2024-11-20 11:26:35.336713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:27.574 [2024-11-20 11:26:35.336781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:27.574 [2024-11-20 11:26:35.336817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:27.574 [2024-11-20 11:26:35.336849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:27.574 [2024-11-20 11:26:35.336873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.574 [2024-11-20 11:26:35.336892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:27.574 request: 00:14:27.574 { 00:14:27.574 "name": "raid_bdev1", 00:14:27.574 "raid_level": "raid0", 00:14:27.574 "base_bdevs": [ 00:14:27.574 "malloc1", 00:14:27.574 "malloc2", 00:14:27.574 "malloc3", 00:14:27.574 "malloc4" 00:14:27.574 ], 00:14:27.574 "strip_size_kb": 64, 00:14:27.574 "superblock": false, 00:14:27.574 "method": "bdev_raid_create", 00:14:27.574 "req_id": 1 00:14:27.574 } 00:14:27.574 Got JSON-RPC error response 00:14:27.574 response: 00:14:27.574 { 00:14:27.574 "code": -17, 00:14:27.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:27.574 } 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.574 [2024-11-20 11:26:35.401720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.574 [2024-11-20 11:26:35.401901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.574 [2024-11-20 11:26:35.401969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:27.574 [2024-11-20 11:26:35.402134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.574 [2024-11-20 11:26:35.405032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.574 [2024-11-20 11:26:35.405199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.574 [2024-11-20 11:26:35.405395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:27.574 [2024-11-20 11:26:35.405573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.574 pt1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.574 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.832 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.832 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.832 "name": "raid_bdev1", 00:14:27.832 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:27.832 "strip_size_kb": 64, 00:14:27.832 "state": "configuring", 00:14:27.832 "raid_level": "raid0", 00:14:27.832 "superblock": true, 00:14:27.832 "num_base_bdevs": 4, 00:14:27.832 "num_base_bdevs_discovered": 1, 00:14:27.832 "num_base_bdevs_operational": 4, 00:14:27.832 "base_bdevs_list": [ 00:14:27.832 { 00:14:27.832 "name": "pt1", 00:14:27.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.833 "is_configured": true, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 00:14:27.833 "name": null, 00:14:27.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.833 "is_configured": false, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 00:14:27.833 "name": null, 00:14:27.833 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:27.833 "is_configured": false, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 00:14:27.833 "name": null, 00:14:27.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:27.833 "is_configured": false, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 } 00:14:27.833 ] 00:14:27.833 }' 00:14:27.833 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.833 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.091 [2024-11-20 11:26:35.914135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.091 [2024-11-20 11:26:35.914411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.091 [2024-11-20 11:26:35.914451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:28.091 [2024-11-20 11:26:35.914472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.091 [2024-11-20 11:26:35.915132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.091 [2024-11-20 11:26:35.915162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.091 [2024-11-20 11:26:35.915266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.091 [2024-11-20 11:26:35.915301] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.091 pt2 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.091 [2024-11-20 11:26:35.922114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.091 11:26:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.091 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.350 11:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.350 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.350 "name": "raid_bdev1", 00:14:28.350 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:28.350 "strip_size_kb": 64, 00:14:28.350 "state": "configuring", 00:14:28.350 "raid_level": "raid0", 00:14:28.350 "superblock": true, 00:14:28.350 "num_base_bdevs": 4, 00:14:28.350 "num_base_bdevs_discovered": 1, 00:14:28.350 "num_base_bdevs_operational": 4, 00:14:28.350 "base_bdevs_list": [ 00:14:28.350 { 00:14:28.350 "name": "pt1", 00:14:28.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.350 "is_configured": true, 00:14:28.350 "data_offset": 2048, 00:14:28.350 "data_size": 63488 00:14:28.350 }, 00:14:28.350 { 00:14:28.350 "name": null, 00:14:28.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.350 "is_configured": false, 00:14:28.350 "data_offset": 0, 00:14:28.350 "data_size": 63488 00:14:28.350 }, 00:14:28.350 { 00:14:28.350 "name": null, 00:14:28.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.350 "is_configured": false, 00:14:28.350 "data_offset": 2048, 00:14:28.350 "data_size": 63488 00:14:28.350 }, 00:14:28.350 { 00:14:28.350 "name": null, 00:14:28.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.350 "is_configured": false, 00:14:28.350 "data_offset": 2048, 00:14:28.350 "data_size": 63488 00:14:28.350 } 00:14:28.350 ] 00:14:28.350 }' 00:14:28.350 11:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.350 11:26:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.609 [2024-11-20 11:26:36.446288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.609 [2024-11-20 11:26:36.446542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.609 [2024-11-20 11:26:36.446587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:28.609 [2024-11-20 11:26:36.446604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.609 [2024-11-20 11:26:36.447209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.609 [2024-11-20 11:26:36.447234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.609 [2024-11-20 11:26:36.447336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.609 [2024-11-20 11:26:36.447368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.609 pt2 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.609 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.868 [2024-11-20 11:26:36.458268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.868 [2024-11-20 11:26:36.458339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.868 [2024-11-20 11:26:36.458373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:28.868 [2024-11-20 11:26:36.458389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.868 [2024-11-20 11:26:36.458859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.869 [2024-11-20 11:26:36.458906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.869 [2024-11-20 11:26:36.458987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:28.869 [2024-11-20 11:26:36.459014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.869 pt3 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.869 [2024-11-20 11:26:36.466263] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:28.869 [2024-11-20 11:26:36.466336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.869 [2024-11-20 11:26:36.466365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:28.869 [2024-11-20 11:26:36.466379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.869 [2024-11-20 11:26:36.466877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.869 [2024-11-20 11:26:36.466908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:28.869 [2024-11-20 11:26:36.466990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:28.869 [2024-11-20 11:26:36.467049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:28.869 [2024-11-20 11:26:36.467212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.869 [2024-11-20 11:26:36.467228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:28.869 [2024-11-20 11:26:36.467533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:28.869 [2024-11-20 11:26:36.467752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.869 [2024-11-20 11:26:36.467783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:28.869 [2024-11-20 11:26:36.467939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.869 pt4 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.869 "name": "raid_bdev1", 00:14:28.869 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:28.869 "strip_size_kb": 64, 00:14:28.869 "state": "online", 00:14:28.869 "raid_level": "raid0", 00:14:28.869 
"superblock": true, 00:14:28.869 "num_base_bdevs": 4, 00:14:28.869 "num_base_bdevs_discovered": 4, 00:14:28.869 "num_base_bdevs_operational": 4, 00:14:28.869 "base_bdevs_list": [ 00:14:28.869 { 00:14:28.869 "name": "pt1", 00:14:28.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "pt2", 00:14:28.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "pt3", 00:14:28.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 }, 00:14:28.869 { 00:14:28.869 "name": "pt4", 00:14:28.869 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.869 "is_configured": true, 00:14:28.869 "data_offset": 2048, 00:14:28.869 "data_size": 63488 00:14:28.869 } 00:14:28.869 ] 00:14:28.869 }' 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.869 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.437 11:26:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.437 11:26:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 [2024-11-20 11:26:36.998890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.437 "name": "raid_bdev1", 00:14:29.437 "aliases": [ 00:14:29.437 "116d2314-42ed-430a-8898-063d12a2299f" 00:14:29.437 ], 00:14:29.437 "product_name": "Raid Volume", 00:14:29.437 "block_size": 512, 00:14:29.437 "num_blocks": 253952, 00:14:29.437 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:29.437 "assigned_rate_limits": { 00:14:29.437 "rw_ios_per_sec": 0, 00:14:29.437 "rw_mbytes_per_sec": 0, 00:14:29.437 "r_mbytes_per_sec": 0, 00:14:29.437 "w_mbytes_per_sec": 0 00:14:29.437 }, 00:14:29.437 "claimed": false, 00:14:29.437 "zoned": false, 00:14:29.437 "supported_io_types": { 00:14:29.437 "read": true, 00:14:29.437 "write": true, 00:14:29.437 "unmap": true, 00:14:29.437 "flush": true, 00:14:29.437 "reset": true, 00:14:29.437 "nvme_admin": false, 00:14:29.437 "nvme_io": false, 00:14:29.437 "nvme_io_md": false, 00:14:29.437 "write_zeroes": true, 00:14:29.437 "zcopy": false, 00:14:29.437 "get_zone_info": false, 00:14:29.437 "zone_management": false, 00:14:29.437 "zone_append": false, 00:14:29.437 "compare": false, 00:14:29.437 "compare_and_write": false, 00:14:29.437 "abort": false, 00:14:29.437 "seek_hole": false, 00:14:29.437 "seek_data": false, 00:14:29.437 "copy": false, 00:14:29.437 "nvme_iov_md": false 00:14:29.437 }, 00:14:29.437 
"memory_domains": [ 00:14:29.437 { 00:14:29.437 "dma_device_id": "system", 00:14:29.437 "dma_device_type": 1 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.437 "dma_device_type": 2 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "system", 00:14:29.437 "dma_device_type": 1 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.437 "dma_device_type": 2 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "system", 00:14:29.437 "dma_device_type": 1 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.437 "dma_device_type": 2 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "system", 00:14:29.437 "dma_device_type": 1 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.437 "dma_device_type": 2 00:14:29.437 } 00:14:29.437 ], 00:14:29.437 "driver_specific": { 00:14:29.437 "raid": { 00:14:29.437 "uuid": "116d2314-42ed-430a-8898-063d12a2299f", 00:14:29.437 "strip_size_kb": 64, 00:14:29.437 "state": "online", 00:14:29.437 "raid_level": "raid0", 00:14:29.437 "superblock": true, 00:14:29.437 "num_base_bdevs": 4, 00:14:29.437 "num_base_bdevs_discovered": 4, 00:14:29.437 "num_base_bdevs_operational": 4, 00:14:29.437 "base_bdevs_list": [ 00:14:29.437 { 00:14:29.437 "name": "pt1", 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.437 "is_configured": true, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": "pt2", 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.437 "is_configured": true, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": "pt3", 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.437 "is_configured": true, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 
00:14:29.437 }, 00:14:29.437 { 00:14:29.437 "name": "pt4", 00:14:29.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.437 "is_configured": true, 00:14:29.437 "data_offset": 2048, 00:14:29.437 "data_size": 63488 00:14:29.437 } 00:14:29.437 ] 00:14:29.437 } 00:14:29.437 } 00:14:29.437 }' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.437 pt2 00:14:29.437 pt3 00:14:29.437 pt4' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:29.437 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.438 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.438 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.438 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.724 [2024-11-20 11:26:37.370870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 116d2314-42ed-430a-8898-063d12a2299f '!=' 116d2314-42ed-430a-8898-063d12a2299f ']' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70750 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70750 ']' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70750 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70750 00:14:29.724 killing process with pid 70750 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70750' 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70750 00:14:29.724 [2024-11-20 11:26:37.450827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.724 [2024-11-20 11:26:37.450922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.724 11:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70750 00:14:29.724 [2024-11-20 11:26:37.451031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.724 [2024-11-20 11:26:37.451047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:30.010 [2024-11-20 11:26:37.803191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.386 11:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:31.386 00:14:31.386 real 0m5.853s 00:14:31.386 user 0m8.786s 00:14:31.386 sys 0m0.870s 00:14:31.386 ************************************ 00:14:31.386 END TEST raid_superblock_test 00:14:31.386 ************************************ 00:14:31.386 11:26:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.386 11:26:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.386 11:26:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:31.386 11:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:31.386 11:26:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.386 11:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.386 ************************************ 00:14:31.386 START TEST raid_read_error_test 00:14:31.386 ************************************ 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZcsPVh59nq 00:14:31.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71017 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71017 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71017 ']' 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.386 11:26:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.386 [2024-11-20 11:26:38.981931] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:31.386 [2024-11-20 11:26:38.982249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:14:31.386 [2024-11-20 11:26:39.159636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.647 [2024-11-20 11:26:39.290255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.906 [2024-11-20 11:26:39.495085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.906 [2024-11-20 11:26:39.495374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 BaseBdev1_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 true 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 [2024-11-20 11:26:40.072155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:32.474 [2024-11-20 11:26:40.072223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.474 [2024-11-20 11:26:40.072253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:32.474 [2024-11-20 11:26:40.072271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.474 [2024-11-20 11:26:40.075099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.474 [2024-11-20 11:26:40.075282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.474 BaseBdev1 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 BaseBdev2_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 true 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.474 [2024-11-20 11:26:40.137042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:32.474 [2024-11-20 11:26:40.137110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.474 [2024-11-20 11:26:40.137136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:32.474 [2024-11-20 11:26:40.137153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.474 [2024-11-20 11:26:40.140031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.474 [2024-11-20 11:26:40.140076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.474 BaseBdev2 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.474 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 BaseBdev3_malloc 00:14:32.475 11:26:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 true 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 [2024-11-20 11:26:40.209109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:32.475 [2024-11-20 11:26:40.209172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.475 [2024-11-20 11:26:40.209198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:32.475 [2024-11-20 11:26:40.209216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.475 [2024-11-20 11:26:40.212088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.475 [2024-11-20 11:26:40.212261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:32.475 BaseBdev3 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 BaseBdev4_malloc 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 true 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 [2024-11-20 11:26:40.270157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:32.475 [2024-11-20 11:26:40.270401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.475 [2024-11-20 11:26:40.270439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:32.475 [2024-11-20 11:26:40.270458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.475 [2024-11-20 11:26:40.273336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.475 [2024-11-20 11:26:40.273398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:32.475 BaseBdev4 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 [2024-11-20 11:26:40.278284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.475 [2024-11-20 11:26:40.280888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.475 [2024-11-20 11:26:40.281166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.475 [2024-11-20 11:26:40.281428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.475 [2024-11-20 11:26:40.281902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:32.475 [2024-11-20 11:26:40.282079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:32.475 [2024-11-20 11:26:40.282464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:32.475 [2024-11-20 11:26:40.282841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:32.475 [2024-11-20 11:26:40.282971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:32.475 [2024-11-20 11:26:40.283344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:32.475 11:26:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.475 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.734 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.734 "name": "raid_bdev1", 00:14:32.734 "uuid": "f76d4ea2-54bf-4588-bf36-8c7a62c67f92", 00:14:32.734 "strip_size_kb": 64, 00:14:32.734 "state": "online", 00:14:32.734 "raid_level": "raid0", 00:14:32.734 "superblock": true, 00:14:32.734 "num_base_bdevs": 4, 00:14:32.734 "num_base_bdevs_discovered": 4, 00:14:32.734 "num_base_bdevs_operational": 4, 00:14:32.734 "base_bdevs_list": [ 00:14:32.734 
{ 00:14:32.734 "name": "BaseBdev1", 00:14:32.734 "uuid": "60e226d1-ace3-5bdb-b131-a2aefbef9936", 00:14:32.734 "is_configured": true, 00:14:32.734 "data_offset": 2048, 00:14:32.734 "data_size": 63488 00:14:32.734 }, 00:14:32.734 { 00:14:32.734 "name": "BaseBdev2", 00:14:32.734 "uuid": "c0618a5c-7fbf-5d30-b98a-001101bba675", 00:14:32.734 "is_configured": true, 00:14:32.734 "data_offset": 2048, 00:14:32.734 "data_size": 63488 00:14:32.734 }, 00:14:32.734 { 00:14:32.734 "name": "BaseBdev3", 00:14:32.734 "uuid": "73d21d2b-7d4c-511f-ab99-64bbf2c355df", 00:14:32.734 "is_configured": true, 00:14:32.734 "data_offset": 2048, 00:14:32.734 "data_size": 63488 00:14:32.734 }, 00:14:32.734 { 00:14:32.734 "name": "BaseBdev4", 00:14:32.734 "uuid": "f3a6e344-a649-569a-9b33-f7d84a8deda5", 00:14:32.734 "is_configured": true, 00:14:32.734 "data_offset": 2048, 00:14:32.734 "data_size": 63488 00:14:32.734 } 00:14:32.734 ] 00:14:32.734 }' 00:14:32.734 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.734 11:26:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:32.993 11:26:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:33.252 [2024-11-20 11:26:40.908860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.238 11:26:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.238 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.238 11:26:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.238 "name": "raid_bdev1", 00:14:34.238 "uuid": "f76d4ea2-54bf-4588-bf36-8c7a62c67f92", 00:14:34.238 "strip_size_kb": 64, 00:14:34.238 "state": "online", 00:14:34.238 "raid_level": "raid0", 00:14:34.238 "superblock": true, 00:14:34.238 "num_base_bdevs": 4, 00:14:34.238 "num_base_bdevs_discovered": 4, 00:14:34.239 "num_base_bdevs_operational": 4, 00:14:34.239 "base_bdevs_list": [ 00:14:34.239 { 00:14:34.239 "name": "BaseBdev1", 00:14:34.239 "uuid": "60e226d1-ace3-5bdb-b131-a2aefbef9936", 00:14:34.239 "is_configured": true, 00:14:34.239 "data_offset": 2048, 00:14:34.239 "data_size": 63488 00:14:34.239 }, 00:14:34.239 { 00:14:34.239 "name": "BaseBdev2", 00:14:34.239 "uuid": "c0618a5c-7fbf-5d30-b98a-001101bba675", 00:14:34.239 "is_configured": true, 00:14:34.239 "data_offset": 2048, 00:14:34.239 "data_size": 63488 00:14:34.239 }, 00:14:34.239 { 00:14:34.239 "name": "BaseBdev3", 00:14:34.239 "uuid": "73d21d2b-7d4c-511f-ab99-64bbf2c355df", 00:14:34.239 "is_configured": true, 00:14:34.239 "data_offset": 2048, 00:14:34.239 "data_size": 63488 00:14:34.239 }, 00:14:34.239 { 00:14:34.239 "name": "BaseBdev4", 00:14:34.239 "uuid": "f3a6e344-a649-569a-9b33-f7d84a8deda5", 00:14:34.239 "is_configured": true, 00:14:34.239 "data_offset": 2048, 00:14:34.239 "data_size": 63488 00:14:34.239 } 00:14:34.239 ] 00:14:34.239 }' 00:14:34.239 11:26:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.239 11:26:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.497 11:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.497 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.497 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.497 [2024-11-20 11:26:42.307996] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.497 [2024-11-20 11:26:42.308077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.497 [2024-11-20 11:26:42.311489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.497 [2024-11-20 11:26:42.311560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.497 [2024-11-20 11:26:42.311617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.497 [2024-11-20 11:26:42.311667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:34.497 { 00:14:34.498 "results": [ 00:14:34.498 { 00:14:34.498 "job": "raid_bdev1", 00:14:34.498 "core_mask": "0x1", 00:14:34.498 "workload": "randrw", 00:14:34.498 "percentage": 50, 00:14:34.498 "status": "finished", 00:14:34.498 "queue_depth": 1, 00:14:34.498 "io_size": 131072, 00:14:34.498 "runtime": 1.396668, 00:14:34.498 "iops": 10651.063817600174, 00:14:34.498 "mibps": 1331.3829772000217, 00:14:34.498 "io_failed": 1, 00:14:34.498 "io_timeout": 0, 00:14:34.498 "avg_latency_us": 130.93966598837744, 00:14:34.498 "min_latency_us": 39.56363636363636, 00:14:34.498 "max_latency_us": 1846.9236363636364 00:14:34.498 } 00:14:34.498 ], 00:14:34.498 "core_count": 1 00:14:34.498 } 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71017 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71017 ']' 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71017 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.498 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71017 00:14:34.756 killing process with pid 71017 00:14:34.756 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.756 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.756 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71017' 00:14:34.756 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71017 00:14:34.756 [2024-11-20 11:26:42.348418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.756 11:26:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71017 00:14:35.014 [2024-11-20 11:26:42.640713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZcsPVh59nq 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:35.951 ************************************ 00:14:35.951 END TEST raid_read_error_test 00:14:35.951 ************************************ 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:35.951 00:14:35.951 real 0m4.879s 
00:14:35.951 user 0m6.010s 00:14:35.951 sys 0m0.601s 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.951 11:26:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.210 11:26:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:36.210 11:26:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.210 11:26:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.210 11:26:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.210 ************************************ 00:14:36.210 START TEST raid_write_error_test 00:14:36.210 ************************************ 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FEN6EALAob 00:14:36.210 11:26:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71170 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71170 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71170 ']' 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.210 11:26:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.210 [2024-11-20 11:26:43.937360] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:36.210 [2024-11-20 11:26:43.937537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71170 ] 00:14:36.470 [2024-11-20 11:26:44.124653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.470 [2024-11-20 11:26:44.256975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.729 [2024-11-20 11:26:44.463400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.729 [2024-11-20 11:26:44.463449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 BaseBdev1_malloc 00:14:37.296 11:26:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 true 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 [2024-11-20 11:26:45.016525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:37.296 [2024-11-20 11:26:45.016596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.296 [2024-11-20 11:26:45.016645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:37.296 [2024-11-20 11:26:45.016667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.296 [2024-11-20 11:26:45.019510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.296 [2024-11-20 11:26:45.019718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.296 BaseBdev1 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 BaseBdev2_malloc 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:37.296 11:26:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 true 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.296 [2024-11-20 11:26:45.077492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:37.296 [2024-11-20 11:26:45.077560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.296 [2024-11-20 11:26:45.077587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:37.296 [2024-11-20 11:26:45.077606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.296 [2024-11-20 11:26:45.080402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.296 [2024-11-20 11:26:45.080454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.296 BaseBdev2 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.296 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.297 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:37.297 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.297 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:37.554 BaseBdev3_malloc 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.554 true 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.554 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 [2024-11-20 11:26:45.151663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:37.555 [2024-11-20 11:26:45.151755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.555 [2024-11-20 11:26:45.151784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:37.555 [2024-11-20 11:26:45.151802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.555 [2024-11-20 11:26:45.154604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.555 [2024-11-20 11:26:45.154845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.555 BaseBdev3 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 BaseBdev4_malloc 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 true 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 [2024-11-20 11:26:45.208599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:37.555 [2024-11-20 11:26:45.208681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.555 [2024-11-20 11:26:45.208709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:37.555 [2024-11-20 11:26:45.208728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.555 [2024-11-20 11:26:45.211490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.555 [2024-11-20 11:26:45.211544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.555 BaseBdev4 
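The xtrace lines above show `raid_io_error_test` assembling, for each of the four base devices, a malloc bdev wrapped in an error-injection bdev and exposed through a passthru bdev (`bdev_malloc_create` → `bdev_error_create` → `bdev_passthru_create`), while the `(( i <= num_base_bdevs ))` / `echo BaseBdevN` trace is just an array-building loop. A minimal self-contained bash sketch of that naming scheme (all names and RPC commands are taken verbatim from the trace; the `rpc_cmd` calls need a running SPDK target, so they are shown as comments):

```shell
#!/usr/bin/env bash
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev="BaseBdev$i"
    base_bdevs+=("$bdev")
    # Against a live SPDK target the test issues, per base device:
    #   rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
    #   rpc_cmd bdev_error_create "${bdev}_malloc"            # injectable error layer
    #   rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
echo "${base_bdevs[@]}"
```

The passthru layer is what lets the test later target `EE_BaseBdev1_malloc` with `bdev_error_inject_error` while the RAID volume only ever sees `BaseBdev1`.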
00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 [2024-11-20 11:26:45.216698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.555 [2024-11-20 11:26:45.219512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.555 [2024-11-20 11:26:45.219686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.555 [2024-11-20 11:26:45.219815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.555 [2024-11-20 11:26:45.220136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:37.555 [2024-11-20 11:26:45.220175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:37.555 [2024-11-20 11:26:45.220479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:37.555 [2024-11-20 11:26:45.220791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:37.555 [2024-11-20 11:26:45.220813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:37.555 [2024-11-20 11:26:45.221062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.555 "name": "raid_bdev1", 00:14:37.555 "uuid": "6b054e0f-3b9a-421e-914d-0af0ca44ce7c", 00:14:37.555 "strip_size_kb": 64, 00:14:37.555 "state": "online", 00:14:37.555 "raid_level": "raid0", 00:14:37.555 "superblock": true, 00:14:37.555 "num_base_bdevs": 4, 00:14:37.555 "num_base_bdevs_discovered": 4, 00:14:37.555 
"num_base_bdevs_operational": 4, 00:14:37.555 "base_bdevs_list": [ 00:14:37.555 { 00:14:37.555 "name": "BaseBdev1", 00:14:37.555 "uuid": "ec7cd399-84df-5e1f-b124-fb77d2e00456", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 2048, 00:14:37.555 "data_size": 63488 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev2", 00:14:37.555 "uuid": "7a74991f-407f-5472-9ebc-fd184ef54197", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 2048, 00:14:37.555 "data_size": 63488 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev3", 00:14:37.555 "uuid": "68a17172-1078-5ea3-8b91-5c9794afc92c", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 2048, 00:14:37.555 "data_size": 63488 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev4", 00:14:37.555 "uuid": "8b398cbc-590e-5a21-bb7f-2422d05187e2", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 2048, 00:14:37.555 "data_size": 63488 00:14:37.555 } 00:14:37.555 ] 00:14:37.555 }' 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.555 11:26:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.168 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:38.168 11:26:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:38.168 [2024-11-20 11:26:45.858751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.102 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.103 "name": "raid_bdev1", 00:14:39.103 "uuid": "6b054e0f-3b9a-421e-914d-0af0ca44ce7c", 00:14:39.103 "strip_size_kb": 64, 00:14:39.103 "state": "online", 00:14:39.103 "raid_level": "raid0", 00:14:39.103 "superblock": true, 00:14:39.103 "num_base_bdevs": 4, 00:14:39.103 "num_base_bdevs_discovered": 4, 00:14:39.103 "num_base_bdevs_operational": 4, 00:14:39.103 "base_bdevs_list": [ 00:14:39.103 { 00:14:39.103 "name": "BaseBdev1", 00:14:39.103 "uuid": "ec7cd399-84df-5e1f-b124-fb77d2e00456", 00:14:39.103 "is_configured": true, 00:14:39.103 "data_offset": 2048, 00:14:39.103 "data_size": 63488 00:14:39.103 }, 00:14:39.103 { 00:14:39.103 "name": "BaseBdev2", 00:14:39.103 "uuid": "7a74991f-407f-5472-9ebc-fd184ef54197", 00:14:39.103 "is_configured": true, 00:14:39.103 "data_offset": 2048, 00:14:39.103 "data_size": 63488 00:14:39.103 }, 00:14:39.103 { 00:14:39.103 "name": "BaseBdev3", 00:14:39.103 "uuid": "68a17172-1078-5ea3-8b91-5c9794afc92c", 00:14:39.103 "is_configured": true, 00:14:39.103 "data_offset": 2048, 00:14:39.103 "data_size": 63488 00:14:39.103 }, 00:14:39.103 { 00:14:39.103 "name": "BaseBdev4", 00:14:39.103 "uuid": "8b398cbc-590e-5a21-bb7f-2422d05187e2", 00:14:39.103 "is_configured": true, 00:14:39.103 "data_offset": 2048, 00:14:39.103 "data_size": 63488 00:14:39.103 } 00:14:39.103 ] 00:14:39.103 }' 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.103 11:26:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:39.669 [2024-11-20 11:26:47.226473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.669 [2024-11-20 11:26:47.226512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.669 [2024-11-20 11:26:47.230100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.669 [2024-11-20 11:26:47.230333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.669 [2024-11-20 11:26:47.230450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.669 [2024-11-20 11:26:47.230672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:39.669 { 00:14:39.669 "results": [ 00:14:39.669 { 00:14:39.669 "job": "raid_bdev1", 00:14:39.669 "core_mask": "0x1", 00:14:39.669 "workload": "randrw", 00:14:39.669 "percentage": 50, 00:14:39.669 "status": "finished", 00:14:39.669 "queue_depth": 1, 00:14:39.669 "io_size": 131072, 00:14:39.669 "runtime": 1.364978, 00:14:39.669 "iops": 10534.235716619609, 00:14:39.669 "mibps": 1316.779464577451, 00:14:39.669 "io_failed": 1, 00:14:39.669 "io_timeout": 0, 00:14:39.669 "avg_latency_us": 132.37793501074722, 00:14:39.669 "min_latency_us": 39.56363636363636, 00:14:39.669 "max_latency_us": 1817.1345454545456 00:14:39.669 } 00:14:39.669 ], 00:14:39.669 "core_count": 1 00:14:39.669 } 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71170 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71170 ']' 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71170 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
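The results JSON above is what the test later reduces to a single failures-per-second figure (`grep raid_bdev1 | awk '{print $6}'` over the bdevperf log, compared against `0.00`). As a sanity check, the reported numbers are internally consistent, assuming the failure rate is `io_failed / runtime` and throughput is `iops × io_size`; a small sketch with the values copied from the log:

```python
# Values copied from the bdevperf results JSON in the log above.
runtime = 1.364978           # seconds
iops = 10534.235716619609
io_size = 131072             # 128 KiB per I/O (-o 128k)
io_failed = 1                # the single injected write error

mibps = iops * io_size / (1024 * 1024)   # reported "mibps" field
fail_per_s = io_failed / runtime         # rate the test greps out as 0.73

print(round(mibps, 6), round(fail_per_s, 2))  # prints: 1316.779465 0.73
```

The recovered `0.73` being nonzero is exactly what the write-error test asserts: the injected failure on `EE_BaseBdev1_malloc` must surface through the raid0 volume rather than be silently absorbed.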
00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71170 00:14:39.669 killing process with pid 71170 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71170' 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71170 00:14:39.669 [2024-11-20 11:26:47.266053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.669 11:26:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71170 00:14:39.927 [2024-11-20 11:26:47.558317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FEN6EALAob 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:40.863 ************************************ 00:14:40.863 END TEST raid_write_error_test 00:14:40.863 ************************************ 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:14:40.863 00:14:40.863 real 0m4.862s 00:14:40.863 user 0m5.973s 00:14:40.863 sys 0m0.626s 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.863 11:26:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 11:26:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:41.121 11:26:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:41.121 11:26:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:41.121 11:26:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.121 11:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 ************************************ 00:14:41.121 START TEST raid_state_function_test 00:14:41.121 ************************************ 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.121 Process raid pid: 71315 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:41.121 11:26:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71315 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71315' 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71315 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71315 ']' 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.121 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 [2024-11-20 11:26:48.825902] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:41.121 [2024-11-20 11:26:48.826237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.380 [2024-11-20 11:26:49.004791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.380 [2024-11-20 11:26:49.147204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.637 [2024-11-20 11:26:49.361664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.637 [2024-11-20 11:26:49.361921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.205 [2024-11-20 11:26:49.878356] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.205 [2024-11-20 11:26:49.878419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.205 [2024-11-20 11:26:49.878437] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.205 [2024-11-20 11:26:49.878463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.205 [2024-11-20 11:26:49.878473] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:42.205 [2024-11-20 11:26:49.878487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.205 [2024-11-20 11:26:49.878497] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.205 [2024-11-20 11:26:49.878511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.205 "name": "Existed_Raid", 00:14:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.205 "strip_size_kb": 64, 00:14:42.205 "state": "configuring", 00:14:42.205 "raid_level": "concat", 00:14:42.205 "superblock": false, 00:14:42.205 "num_base_bdevs": 4, 00:14:42.205 "num_base_bdevs_discovered": 0, 00:14:42.205 "num_base_bdevs_operational": 4, 00:14:42.205 "base_bdevs_list": [ 00:14:42.205 { 00:14:42.205 "name": "BaseBdev1", 00:14:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.205 "is_configured": false, 00:14:42.205 "data_offset": 0, 00:14:42.205 "data_size": 0 00:14:42.205 }, 00:14:42.205 { 00:14:42.205 "name": "BaseBdev2", 00:14:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.205 "is_configured": false, 00:14:42.205 "data_offset": 0, 00:14:42.205 "data_size": 0 00:14:42.205 }, 00:14:42.205 { 00:14:42.205 "name": "BaseBdev3", 00:14:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.205 "is_configured": false, 00:14:42.205 "data_offset": 0, 00:14:42.205 "data_size": 0 00:14:42.205 }, 00:14:42.205 { 00:14:42.205 "name": "BaseBdev4", 00:14:42.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.205 "is_configured": false, 00:14:42.205 "data_offset": 0, 00:14:42.205 "data_size": 0 00:14:42.205 } 00:14:42.205 ] 00:14:42.205 }' 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.205 11:26:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 [2024-11-20 11:26:50.402487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.772 [2024-11-20 11:26:50.402535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 [2024-11-20 11:26:50.410472] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.772 [2024-11-20 11:26:50.410524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.772 [2024-11-20 11:26:50.410539] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.772 [2024-11-20 11:26:50.410556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.772 [2024-11-20 11:26:50.410565] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.772 [2024-11-20 11:26:50.410580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.772 [2024-11-20 11:26:50.410589] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.772 [2024-11-20 11:26:50.410603] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.772 [2024-11-20 11:26:50.455867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.772 BaseBdev1 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.772 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.773 [ 00:14:42.773 { 00:14:42.773 "name": "BaseBdev1", 00:14:42.773 "aliases": [ 00:14:42.773 "abefeeeb-437e-4eda-bcaf-4c7594eba534" 00:14:42.773 ], 00:14:42.773 "product_name": "Malloc disk", 00:14:42.773 "block_size": 512, 00:14:42.773 "num_blocks": 65536, 00:14:42.773 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:42.773 "assigned_rate_limits": { 00:14:42.773 "rw_ios_per_sec": 0, 00:14:42.773 "rw_mbytes_per_sec": 0, 00:14:42.773 "r_mbytes_per_sec": 0, 00:14:42.773 "w_mbytes_per_sec": 0 00:14:42.773 }, 00:14:42.773 "claimed": true, 00:14:42.773 "claim_type": "exclusive_write", 00:14:42.773 "zoned": false, 00:14:42.773 "supported_io_types": { 00:14:42.773 "read": true, 00:14:42.773 "write": true, 00:14:42.773 "unmap": true, 00:14:42.773 "flush": true, 00:14:42.773 "reset": true, 00:14:42.773 "nvme_admin": false, 00:14:42.773 "nvme_io": false, 00:14:42.773 "nvme_io_md": false, 00:14:42.773 "write_zeroes": true, 00:14:42.773 "zcopy": true, 00:14:42.773 "get_zone_info": false, 00:14:42.773 "zone_management": false, 00:14:42.773 "zone_append": false, 00:14:42.773 "compare": false, 00:14:42.773 "compare_and_write": false, 00:14:42.773 "abort": true, 00:14:42.773 "seek_hole": false, 00:14:42.773 "seek_data": false, 00:14:42.773 "copy": true, 00:14:42.773 "nvme_iov_md": false 00:14:42.773 }, 00:14:42.773 "memory_domains": [ 00:14:42.773 { 00:14:42.773 "dma_device_id": "system", 00:14:42.773 "dma_device_type": 1 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.773 "dma_device_type": 2 00:14:42.773 } 00:14:42.773 ], 00:14:42.773 "driver_specific": {} 00:14:42.773 } 00:14:42.773 ] 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.773 "name": "Existed_Raid", 
00:14:42.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.773 "strip_size_kb": 64, 00:14:42.773 "state": "configuring", 00:14:42.773 "raid_level": "concat", 00:14:42.773 "superblock": false, 00:14:42.773 "num_base_bdevs": 4, 00:14:42.773 "num_base_bdevs_discovered": 1, 00:14:42.773 "num_base_bdevs_operational": 4, 00:14:42.773 "base_bdevs_list": [ 00:14:42.773 { 00:14:42.773 "name": "BaseBdev1", 00:14:42.773 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:42.773 "is_configured": true, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 65536 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "name": "BaseBdev2", 00:14:42.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.773 "is_configured": false, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 0 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "name": "BaseBdev3", 00:14:42.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.773 "is_configured": false, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 0 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "name": "BaseBdev4", 00:14:42.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.773 "is_configured": false, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 0 00:14:42.773 } 00:14:42.773 ] 00:14:42.773 }' 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.773 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.341 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.341 11:26:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 [2024-11-20 11:26:51.000110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.341 [2024-11-20 11:26:51.000328] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 [2024-11-20 11:26:51.012162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.341 [2024-11-20 11:26:51.014758] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.341 [2024-11-20 11:26:51.014811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.341 [2024-11-20 11:26:51.014828] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.341 [2024-11-20 11:26:51.014847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.341 [2024-11-20 11:26:51.014857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:43.341 [2024-11-20 11:26:51.014871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.341 "name": "Existed_Raid", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.341 "strip_size_kb": 64, 00:14:43.341 "state": "configuring", 00:14:43.341 "raid_level": "concat", 00:14:43.341 "superblock": false, 00:14:43.341 "num_base_bdevs": 4, 00:14:43.341 
"num_base_bdevs_discovered": 1, 00:14:43.341 "num_base_bdevs_operational": 4, 00:14:43.341 "base_bdevs_list": [ 00:14:43.341 { 00:14:43.341 "name": "BaseBdev1", 00:14:43.341 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:43.341 "is_configured": true, 00:14:43.341 "data_offset": 0, 00:14:43.341 "data_size": 65536 00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "BaseBdev2", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.341 "is_configured": false, 00:14:43.341 "data_offset": 0, 00:14:43.341 "data_size": 0 00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "BaseBdev3", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.341 "is_configured": false, 00:14:43.341 "data_offset": 0, 00:14:43.341 "data_size": 0 00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "BaseBdev4", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.341 "is_configured": false, 00:14:43.341 "data_offset": 0, 00:14:43.341 "data_size": 0 00:14:43.341 } 00:14:43.341 ] 00:14:43.341 }' 00:14:43.341 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.342 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.908 [2024-11-20 11:26:51.567964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.908 BaseBdev2 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:43.908 11:26:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.908 [ 00:14:43.908 { 00:14:43.908 "name": "BaseBdev2", 00:14:43.908 "aliases": [ 00:14:43.908 "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f" 00:14:43.908 ], 00:14:43.908 "product_name": "Malloc disk", 00:14:43.908 "block_size": 512, 00:14:43.908 "num_blocks": 65536, 00:14:43.908 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:43.908 "assigned_rate_limits": { 00:14:43.908 "rw_ios_per_sec": 0, 00:14:43.908 "rw_mbytes_per_sec": 0, 00:14:43.908 "r_mbytes_per_sec": 0, 00:14:43.908 "w_mbytes_per_sec": 0 00:14:43.908 }, 00:14:43.908 "claimed": true, 00:14:43.908 "claim_type": "exclusive_write", 00:14:43.908 "zoned": false, 00:14:43.908 "supported_io_types": { 
00:14:43.908 "read": true, 00:14:43.908 "write": true, 00:14:43.908 "unmap": true, 00:14:43.908 "flush": true, 00:14:43.908 "reset": true, 00:14:43.908 "nvme_admin": false, 00:14:43.908 "nvme_io": false, 00:14:43.908 "nvme_io_md": false, 00:14:43.908 "write_zeroes": true, 00:14:43.908 "zcopy": true, 00:14:43.908 "get_zone_info": false, 00:14:43.908 "zone_management": false, 00:14:43.908 "zone_append": false, 00:14:43.908 "compare": false, 00:14:43.908 "compare_and_write": false, 00:14:43.908 "abort": true, 00:14:43.908 "seek_hole": false, 00:14:43.908 "seek_data": false, 00:14:43.908 "copy": true, 00:14:43.908 "nvme_iov_md": false 00:14:43.908 }, 00:14:43.908 "memory_domains": [ 00:14:43.908 { 00:14:43.908 "dma_device_id": "system", 00:14:43.908 "dma_device_type": 1 00:14:43.908 }, 00:14:43.908 { 00:14:43.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.908 "dma_device_type": 2 00:14:43.908 } 00:14:43.908 ], 00:14:43.908 "driver_specific": {} 00:14:43.908 } 00:14:43.908 ] 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.908 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.909 "name": "Existed_Raid", 00:14:43.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.909 "strip_size_kb": 64, 00:14:43.909 "state": "configuring", 00:14:43.909 "raid_level": "concat", 00:14:43.909 "superblock": false, 00:14:43.909 "num_base_bdevs": 4, 00:14:43.909 "num_base_bdevs_discovered": 2, 00:14:43.909 "num_base_bdevs_operational": 4, 00:14:43.909 "base_bdevs_list": [ 00:14:43.909 { 00:14:43.909 "name": "BaseBdev1", 00:14:43.909 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:43.909 "is_configured": true, 00:14:43.909 "data_offset": 0, 00:14:43.909 "data_size": 65536 00:14:43.909 }, 00:14:43.909 { 00:14:43.909 "name": "BaseBdev2", 00:14:43.909 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:43.909 
"is_configured": true, 00:14:43.909 "data_offset": 0, 00:14:43.909 "data_size": 65536 00:14:43.909 }, 00:14:43.909 { 00:14:43.909 "name": "BaseBdev3", 00:14:43.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.909 "is_configured": false, 00:14:43.909 "data_offset": 0, 00:14:43.909 "data_size": 0 00:14:43.909 }, 00:14:43.909 { 00:14:43.909 "name": "BaseBdev4", 00:14:43.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.909 "is_configured": false, 00:14:43.909 "data_offset": 0, 00:14:43.909 "data_size": 0 00:14:43.909 } 00:14:43.909 ] 00:14:43.909 }' 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.909 11:26:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.476 [2024-11-20 11:26:52.168774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.476 BaseBdev3 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.476 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.476 [ 00:14:44.476 { 00:14:44.476 "name": "BaseBdev3", 00:14:44.476 "aliases": [ 00:14:44.476 "2b243be7-38dc-4e46-90bc-0fc0caee03fa" 00:14:44.476 ], 00:14:44.476 "product_name": "Malloc disk", 00:14:44.476 "block_size": 512, 00:14:44.476 "num_blocks": 65536, 00:14:44.476 "uuid": "2b243be7-38dc-4e46-90bc-0fc0caee03fa", 00:14:44.476 "assigned_rate_limits": { 00:14:44.476 "rw_ios_per_sec": 0, 00:14:44.476 "rw_mbytes_per_sec": 0, 00:14:44.476 "r_mbytes_per_sec": 0, 00:14:44.476 "w_mbytes_per_sec": 0 00:14:44.476 }, 00:14:44.476 "claimed": true, 00:14:44.476 "claim_type": "exclusive_write", 00:14:44.476 "zoned": false, 00:14:44.476 "supported_io_types": { 00:14:44.476 "read": true, 00:14:44.476 "write": true, 00:14:44.476 "unmap": true, 00:14:44.476 "flush": true, 00:14:44.476 "reset": true, 00:14:44.476 "nvme_admin": false, 00:14:44.476 "nvme_io": false, 00:14:44.476 "nvme_io_md": false, 00:14:44.476 "write_zeroes": true, 00:14:44.476 "zcopy": true, 00:14:44.477 "get_zone_info": false, 00:14:44.477 "zone_management": false, 00:14:44.477 "zone_append": false, 00:14:44.477 "compare": false, 00:14:44.477 "compare_and_write": false, 
00:14:44.477 "abort": true, 00:14:44.477 "seek_hole": false, 00:14:44.477 "seek_data": false, 00:14:44.477 "copy": true, 00:14:44.477 "nvme_iov_md": false 00:14:44.477 }, 00:14:44.477 "memory_domains": [ 00:14:44.477 { 00:14:44.477 "dma_device_id": "system", 00:14:44.477 "dma_device_type": 1 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.477 "dma_device_type": 2 00:14:44.477 } 00:14:44.477 ], 00:14:44.477 "driver_specific": {} 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.477 "name": "Existed_Raid", 00:14:44.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.477 "strip_size_kb": 64, 00:14:44.477 "state": "configuring", 00:14:44.477 "raid_level": "concat", 00:14:44.477 "superblock": false, 00:14:44.477 "num_base_bdevs": 4, 00:14:44.477 "num_base_bdevs_discovered": 3, 00:14:44.477 "num_base_bdevs_operational": 4, 00:14:44.477 "base_bdevs_list": [ 00:14:44.477 { 00:14:44.477 "name": "BaseBdev1", 00:14:44.477 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:44.477 "is_configured": true, 00:14:44.477 "data_offset": 0, 00:14:44.477 "data_size": 65536 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "name": "BaseBdev2", 00:14:44.477 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:44.477 "is_configured": true, 00:14:44.477 "data_offset": 0, 00:14:44.477 "data_size": 65536 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "name": "BaseBdev3", 00:14:44.477 "uuid": "2b243be7-38dc-4e46-90bc-0fc0caee03fa", 00:14:44.477 "is_configured": true, 00:14:44.477 "data_offset": 0, 00:14:44.477 "data_size": 65536 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "name": "BaseBdev4", 00:14:44.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.477 "is_configured": false, 
00:14:44.477 "data_offset": 0, 00:14:44.477 "data_size": 0 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 }' 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.477 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 [2024-11-20 11:26:52.748508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.046 [2024-11-20 11:26:52.748570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:45.046 [2024-11-20 11:26:52.748584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:45.046 [2024-11-20 11:26:52.748941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:45.046 [2024-11-20 11:26:52.749161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:45.046 [2024-11-20 11:26:52.749191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:45.046 [2024-11-20 11:26:52.749507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.046 BaseBdev4 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 [ 00:14:45.046 { 00:14:45.046 "name": "BaseBdev4", 00:14:45.046 "aliases": [ 00:14:45.046 "93bd7dfb-6b37-42c7-9910-8cfb9565f2f0" 00:14:45.046 ], 00:14:45.046 "product_name": "Malloc disk", 00:14:45.046 "block_size": 512, 00:14:45.046 "num_blocks": 65536, 00:14:45.046 "uuid": "93bd7dfb-6b37-42c7-9910-8cfb9565f2f0", 00:14:45.046 "assigned_rate_limits": { 00:14:45.046 "rw_ios_per_sec": 0, 00:14:45.046 "rw_mbytes_per_sec": 0, 00:14:45.046 "r_mbytes_per_sec": 0, 00:14:45.046 "w_mbytes_per_sec": 0 00:14:45.046 }, 00:14:45.046 "claimed": true, 00:14:45.046 "claim_type": "exclusive_write", 00:14:45.046 "zoned": false, 00:14:45.046 "supported_io_types": { 00:14:45.046 "read": true, 00:14:45.046 "write": true, 00:14:45.046 "unmap": true, 00:14:45.046 "flush": true, 00:14:45.046 "reset": true, 00:14:45.046 
"nvme_admin": false, 00:14:45.046 "nvme_io": false, 00:14:45.046 "nvme_io_md": false, 00:14:45.046 "write_zeroes": true, 00:14:45.046 "zcopy": true, 00:14:45.046 "get_zone_info": false, 00:14:45.046 "zone_management": false, 00:14:45.046 "zone_append": false, 00:14:45.046 "compare": false, 00:14:45.046 "compare_and_write": false, 00:14:45.046 "abort": true, 00:14:45.046 "seek_hole": false, 00:14:45.046 "seek_data": false, 00:14:45.046 "copy": true, 00:14:45.046 "nvme_iov_md": false 00:14:45.046 }, 00:14:45.046 "memory_domains": [ 00:14:45.046 { 00:14:45.046 "dma_device_id": "system", 00:14:45.046 "dma_device_type": 1 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.046 "dma_device_type": 2 00:14:45.046 } 00:14:45.046 ], 00:14:45.046 "driver_specific": {} 00:14:45.046 } 00:14:45.046 ] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.046 
11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.046 "name": "Existed_Raid", 00:14:45.046 "uuid": "6a370d34-3a67-40ae-afef-d938dca16f45", 00:14:45.046 "strip_size_kb": 64, 00:14:45.046 "state": "online", 00:14:45.046 "raid_level": "concat", 00:14:45.046 "superblock": false, 00:14:45.046 "num_base_bdevs": 4, 00:14:45.046 "num_base_bdevs_discovered": 4, 00:14:45.046 "num_base_bdevs_operational": 4, 00:14:45.046 "base_bdevs_list": [ 00:14:45.046 { 00:14:45.046 "name": "BaseBdev1", 00:14:45.046 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": "BaseBdev2", 00:14:45.046 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": "BaseBdev3", 
00:14:45.046 "uuid": "2b243be7-38dc-4e46-90bc-0fc0caee03fa", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": "BaseBdev4", 00:14:45.046 "uuid": "93bd7dfb-6b37-42c7-9910-8cfb9565f2f0", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 } 00:14:45.046 ] 00:14:45.046 }' 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.046 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.614 [2024-11-20 11:26:53.309160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.614 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.614 
11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.614 "name": "Existed_Raid", 00:14:45.614 "aliases": [ 00:14:45.615 "6a370d34-3a67-40ae-afef-d938dca16f45" 00:14:45.615 ], 00:14:45.615 "product_name": "Raid Volume", 00:14:45.615 "block_size": 512, 00:14:45.615 "num_blocks": 262144, 00:14:45.615 "uuid": "6a370d34-3a67-40ae-afef-d938dca16f45", 00:14:45.615 "assigned_rate_limits": { 00:14:45.615 "rw_ios_per_sec": 0, 00:14:45.615 "rw_mbytes_per_sec": 0, 00:14:45.615 "r_mbytes_per_sec": 0, 00:14:45.615 "w_mbytes_per_sec": 0 00:14:45.615 }, 00:14:45.615 "claimed": false, 00:14:45.615 "zoned": false, 00:14:45.615 "supported_io_types": { 00:14:45.615 "read": true, 00:14:45.615 "write": true, 00:14:45.615 "unmap": true, 00:14:45.615 "flush": true, 00:14:45.615 "reset": true, 00:14:45.615 "nvme_admin": false, 00:14:45.615 "nvme_io": false, 00:14:45.615 "nvme_io_md": false, 00:14:45.615 "write_zeroes": true, 00:14:45.615 "zcopy": false, 00:14:45.615 "get_zone_info": false, 00:14:45.615 "zone_management": false, 00:14:45.615 "zone_append": false, 00:14:45.615 "compare": false, 00:14:45.615 "compare_and_write": false, 00:14:45.615 "abort": false, 00:14:45.615 "seek_hole": false, 00:14:45.615 "seek_data": false, 00:14:45.615 "copy": false, 00:14:45.615 "nvme_iov_md": false 00:14:45.615 }, 00:14:45.615 "memory_domains": [ 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "system", 00:14:45.615 "dma_device_type": 1 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.615 "dma_device_type": 2 00:14:45.615 } 00:14:45.615 ], 00:14:45.615 "driver_specific": { 00:14:45.615 "raid": { 00:14:45.615 "uuid": "6a370d34-3a67-40ae-afef-d938dca16f45", 00:14:45.615 "strip_size_kb": 64, 00:14:45.615 "state": "online", 00:14:45.615 "raid_level": "concat", 00:14:45.615 "superblock": false, 00:14:45.615 "num_base_bdevs": 4, 00:14:45.615 "num_base_bdevs_discovered": 4, 00:14:45.615 "num_base_bdevs_operational": 4, 00:14:45.615 "base_bdevs_list": [ 00:14:45.615 { 00:14:45.615 "name": "BaseBdev1", 00:14:45.615 "uuid": "abefeeeb-437e-4eda-bcaf-4c7594eba534", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 0, 00:14:45.615 "data_size": 65536 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev2", 00:14:45.615 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 0, 00:14:45.615 "data_size": 65536 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev3", 00:14:45.615 "uuid": "2b243be7-38dc-4e46-90bc-0fc0caee03fa", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 0, 00:14:45.615 "data_size": 65536 00:14:45.615 }, 00:14:45.615 { 00:14:45.615 "name": "BaseBdev4", 00:14:45.615 "uuid": "93bd7dfb-6b37-42c7-9910-8cfb9565f2f0", 00:14:45.615 "is_configured": true, 00:14:45.615 "data_offset": 0, 00:14:45.615 "data_size": 65536 00:14:45.615 } 00:14:45.615 ] 00:14:45.615 } 00:14:45.615 } 00:14:45.615 }' 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:45.615 BaseBdev2 
00:14:45.615 BaseBdev3 00:14:45.615 BaseBdev4' 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.615 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.874 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.874 11:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.875 11:26:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.875 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.875 [2024-11-20 11:26:53.684934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.875 [2024-11-20 11:26:53.684976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.875 [2024-11-20 11:26:53.685054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.134 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.134 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.135 "name": "Existed_Raid", 00:14:46.135 "uuid": "6a370d34-3a67-40ae-afef-d938dca16f45", 00:14:46.135 "strip_size_kb": 64, 00:14:46.135 "state": "offline", 00:14:46.135 "raid_level": "concat", 00:14:46.135 "superblock": false, 00:14:46.135 "num_base_bdevs": 4, 00:14:46.135 "num_base_bdevs_discovered": 3, 00:14:46.135 "num_base_bdevs_operational": 3, 00:14:46.135 "base_bdevs_list": [ 00:14:46.135 { 00:14:46.135 "name": null, 00:14:46.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.135 "is_configured": false, 00:14:46.135 "data_offset": 0, 00:14:46.135 "data_size": 65536 00:14:46.135 }, 00:14:46.135 { 00:14:46.135 "name": "BaseBdev2", 00:14:46.135 "uuid": "2a14b4d7-ec82-4e7b-8086-487bb6b8df7f", 00:14:46.135 "is_configured": 
true, 00:14:46.135 "data_offset": 0, 00:14:46.135 "data_size": 65536 00:14:46.135 }, 00:14:46.135 { 00:14:46.135 "name": "BaseBdev3", 00:14:46.135 "uuid": "2b243be7-38dc-4e46-90bc-0fc0caee03fa", 00:14:46.135 "is_configured": true, 00:14:46.135 "data_offset": 0, 00:14:46.135 "data_size": 65536 00:14:46.135 }, 00:14:46.135 { 00:14:46.135 "name": "BaseBdev4", 00:14:46.135 "uuid": "93bd7dfb-6b37-42c7-9910-8cfb9565f2f0", 00:14:46.135 "is_configured": true, 00:14:46.135 "data_offset": 0, 00:14:46.135 "data_size": 65536 00:14:46.135 } 00:14:46.135 ] 00:14:46.135 }' 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.135 11:26:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 [2024-11-20 11:26:54.340037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.701 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.701 [2024-11-20 11:26:54.485771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.022 11:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 [2024-11-20 11:26:54.630361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:47.022 [2024-11-20 11:26:54.630422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 BaseBdev2 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.022 [ 00:14:47.022 { 00:14:47.022 "name": "BaseBdev2", 00:14:47.022 "aliases": [ 00:14:47.022 "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60" 00:14:47.022 ], 00:14:47.022 "product_name": "Malloc disk", 00:14:47.022 "block_size": 512, 00:14:47.022 "num_blocks": 65536, 00:14:47.022 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:47.022 "assigned_rate_limits": { 00:14:47.022 "rw_ios_per_sec": 0, 00:14:47.022 "rw_mbytes_per_sec": 0, 00:14:47.022 "r_mbytes_per_sec": 0, 00:14:47.022 "w_mbytes_per_sec": 0 00:14:47.022 }, 00:14:47.022 "claimed": false, 00:14:47.022 "zoned": false, 00:14:47.022 "supported_io_types": { 00:14:47.022 "read": true, 00:14:47.022 "write": true, 00:14:47.022 "unmap": true, 00:14:47.022 "flush": true, 00:14:47.022 "reset": true, 00:14:47.022 "nvme_admin": false, 00:14:47.022 "nvme_io": false, 00:14:47.022 "nvme_io_md": false, 00:14:47.022 "write_zeroes": true, 00:14:47.022 "zcopy": true, 00:14:47.022 "get_zone_info": false, 00:14:47.022 "zone_management": false, 00:14:47.022 "zone_append": false, 00:14:47.022 "compare": false, 00:14:47.022 "compare_and_write": false, 00:14:47.022 "abort": true, 00:14:47.022 "seek_hole": false, 00:14:47.022 
"seek_data": false, 00:14:47.022 "copy": true, 00:14:47.022 "nvme_iov_md": false 00:14:47.022 }, 00:14:47.022 "memory_domains": [ 00:14:47.022 { 00:14:47.022 "dma_device_id": "system", 00:14:47.022 "dma_device_type": 1 00:14:47.022 }, 00:14:47.022 { 00:14:47.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.022 "dma_device_type": 2 00:14:47.022 } 00:14:47.022 ], 00:14:47.022 "driver_specific": {} 00:14:47.022 } 00:14:47.022 ] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.022 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.282 BaseBdev3 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.282 [ 00:14:47.282 { 00:14:47.282 "name": "BaseBdev3", 00:14:47.282 "aliases": [ 00:14:47.282 "3286202a-e945-48a6-9530-2c57cfce47eb" 00:14:47.282 ], 00:14:47.282 "product_name": "Malloc disk", 00:14:47.282 "block_size": 512, 00:14:47.282 "num_blocks": 65536, 00:14:47.282 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:47.282 "assigned_rate_limits": { 00:14:47.282 "rw_ios_per_sec": 0, 00:14:47.282 "rw_mbytes_per_sec": 0, 00:14:47.282 "r_mbytes_per_sec": 0, 00:14:47.282 "w_mbytes_per_sec": 0 00:14:47.282 }, 00:14:47.282 "claimed": false, 00:14:47.282 "zoned": false, 00:14:47.282 "supported_io_types": { 00:14:47.282 "read": true, 00:14:47.282 "write": true, 00:14:47.282 "unmap": true, 00:14:47.282 "flush": true, 00:14:47.282 "reset": true, 00:14:47.282 "nvme_admin": false, 00:14:47.282 "nvme_io": false, 00:14:47.282 "nvme_io_md": false, 00:14:47.282 "write_zeroes": true, 00:14:47.282 "zcopy": true, 00:14:47.282 "get_zone_info": false, 00:14:47.282 "zone_management": false, 00:14:47.282 "zone_append": false, 00:14:47.282 "compare": false, 00:14:47.282 "compare_and_write": false, 00:14:47.282 "abort": true, 00:14:47.282 "seek_hole": false, 00:14:47.282 "seek_data": false, 
00:14:47.282 "copy": true, 00:14:47.282 "nvme_iov_md": false 00:14:47.282 }, 00:14:47.282 "memory_domains": [ 00:14:47.282 { 00:14:47.282 "dma_device_id": "system", 00:14:47.282 "dma_device_type": 1 00:14:47.282 }, 00:14:47.282 { 00:14:47.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.282 "dma_device_type": 2 00:14:47.282 } 00:14:47.282 ], 00:14:47.282 "driver_specific": {} 00:14:47.282 } 00:14:47.282 ] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.282 BaseBdev4 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.282 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.283 
11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.283 [ 00:14:47.283 { 00:14:47.283 "name": "BaseBdev4", 00:14:47.283 "aliases": [ 00:14:47.283 "b609025f-eb56-43f7-9a0f-e184c5db84d0" 00:14:47.283 ], 00:14:47.283 "product_name": "Malloc disk", 00:14:47.283 "block_size": 512, 00:14:47.283 "num_blocks": 65536, 00:14:47.283 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:47.283 "assigned_rate_limits": { 00:14:47.283 "rw_ios_per_sec": 0, 00:14:47.283 "rw_mbytes_per_sec": 0, 00:14:47.283 "r_mbytes_per_sec": 0, 00:14:47.283 "w_mbytes_per_sec": 0 00:14:47.283 }, 00:14:47.283 "claimed": false, 00:14:47.283 "zoned": false, 00:14:47.283 "supported_io_types": { 00:14:47.283 "read": true, 00:14:47.283 "write": true, 00:14:47.283 "unmap": true, 00:14:47.283 "flush": true, 00:14:47.283 "reset": true, 00:14:47.283 "nvme_admin": false, 00:14:47.283 "nvme_io": false, 00:14:47.283 "nvme_io_md": false, 00:14:47.283 "write_zeroes": true, 00:14:47.283 "zcopy": true, 00:14:47.283 "get_zone_info": false, 00:14:47.283 "zone_management": false, 00:14:47.283 "zone_append": false, 00:14:47.283 "compare": false, 00:14:47.283 "compare_and_write": false, 00:14:47.283 "abort": true, 00:14:47.283 "seek_hole": false, 00:14:47.283 "seek_data": false, 00:14:47.283 
"copy": true, 00:14:47.283 "nvme_iov_md": false 00:14:47.283 }, 00:14:47.283 "memory_domains": [ 00:14:47.283 { 00:14:47.283 "dma_device_id": "system", 00:14:47.283 "dma_device_type": 1 00:14:47.283 }, 00:14:47.283 { 00:14:47.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.283 "dma_device_type": 2 00:14:47.283 } 00:14:47.283 ], 00:14:47.283 "driver_specific": {} 00:14:47.283 } 00:14:47.283 ] 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.283 [2024-11-20 11:26:54.991506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.283 [2024-11-20 11:26:54.991561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.283 [2024-11-20 11:26:54.991593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.283 [2024-11-20 11:26:54.993960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.283 [2024-11-20 11:26:54.994036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.283 11:26:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.283 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.283 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.283 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.283 "name": "Existed_Raid", 00:14:47.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.283 "strip_size_kb": 64, 00:14:47.283 "state": "configuring", 00:14:47.283 
"raid_level": "concat", 00:14:47.283 "superblock": false, 00:14:47.283 "num_base_bdevs": 4, 00:14:47.283 "num_base_bdevs_discovered": 3, 00:14:47.283 "num_base_bdevs_operational": 4, 00:14:47.283 "base_bdevs_list": [ 00:14:47.283 { 00:14:47.283 "name": "BaseBdev1", 00:14:47.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.283 "is_configured": false, 00:14:47.283 "data_offset": 0, 00:14:47.283 "data_size": 0 00:14:47.283 }, 00:14:47.283 { 00:14:47.283 "name": "BaseBdev2", 00:14:47.283 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:47.283 "is_configured": true, 00:14:47.283 "data_offset": 0, 00:14:47.283 "data_size": 65536 00:14:47.283 }, 00:14:47.283 { 00:14:47.283 "name": "BaseBdev3", 00:14:47.283 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:47.283 "is_configured": true, 00:14:47.283 "data_offset": 0, 00:14:47.283 "data_size": 65536 00:14:47.283 }, 00:14:47.283 { 00:14:47.283 "name": "BaseBdev4", 00:14:47.283 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:47.283 "is_configured": true, 00:14:47.283 "data_offset": 0, 00:14:47.283 "data_size": 65536 00:14:47.283 } 00:14:47.283 ] 00:14:47.283 }' 00:14:47.283 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.283 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.851 [2024-11-20 11:26:55.519662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.851 "name": "Existed_Raid", 00:14:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.851 "strip_size_kb": 64, 00:14:47.851 "state": "configuring", 00:14:47.851 "raid_level": "concat", 00:14:47.851 "superblock": false, 
00:14:47.851 "num_base_bdevs": 4, 00:14:47.851 "num_base_bdevs_discovered": 2, 00:14:47.851 "num_base_bdevs_operational": 4, 00:14:47.851 "base_bdevs_list": [ 00:14:47.851 { 00:14:47.851 "name": "BaseBdev1", 00:14:47.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.851 "is_configured": false, 00:14:47.851 "data_offset": 0, 00:14:47.851 "data_size": 0 00:14:47.851 }, 00:14:47.851 { 00:14:47.851 "name": null, 00:14:47.851 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:47.851 "is_configured": false, 00:14:47.851 "data_offset": 0, 00:14:47.851 "data_size": 65536 00:14:47.851 }, 00:14:47.851 { 00:14:47.851 "name": "BaseBdev3", 00:14:47.851 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:47.851 "is_configured": true, 00:14:47.851 "data_offset": 0, 00:14:47.851 "data_size": 65536 00:14:47.851 }, 00:14:47.851 { 00:14:47.851 "name": "BaseBdev4", 00:14:47.851 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:47.851 "is_configured": true, 00:14:47.851 "data_offset": 0, 00:14:47.851 "data_size": 65536 00:14:47.851 } 00:14:47.851 ] 00:14:47.851 }' 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.851 11:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:48.419 11:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 [2024-11-20 11:26:56.126107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.419 BaseBdev1 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.419 [ 00:14:48.419 { 00:14:48.419 "name": "BaseBdev1", 00:14:48.419 "aliases": [ 00:14:48.419 "c25f2bf5-0826-4033-be68-fe3a2a1b98b8" 00:14:48.419 ], 00:14:48.419 "product_name": "Malloc disk", 00:14:48.419 "block_size": 512, 00:14:48.419 "num_blocks": 65536, 00:14:48.419 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:48.419 "assigned_rate_limits": { 00:14:48.419 "rw_ios_per_sec": 0, 00:14:48.419 "rw_mbytes_per_sec": 0, 00:14:48.419 "r_mbytes_per_sec": 0, 00:14:48.419 "w_mbytes_per_sec": 0 00:14:48.419 }, 00:14:48.419 "claimed": true, 00:14:48.419 "claim_type": "exclusive_write", 00:14:48.419 "zoned": false, 00:14:48.419 "supported_io_types": { 00:14:48.419 "read": true, 00:14:48.419 "write": true, 00:14:48.419 "unmap": true, 00:14:48.419 "flush": true, 00:14:48.419 "reset": true, 00:14:48.419 "nvme_admin": false, 00:14:48.419 "nvme_io": false, 00:14:48.419 "nvme_io_md": false, 00:14:48.419 "write_zeroes": true, 00:14:48.419 "zcopy": true, 00:14:48.419 "get_zone_info": false, 00:14:48.419 "zone_management": false, 00:14:48.419 "zone_append": false, 00:14:48.419 "compare": false, 00:14:48.419 "compare_and_write": false, 00:14:48.419 "abort": true, 00:14:48.419 "seek_hole": false, 00:14:48.419 "seek_data": false, 00:14:48.419 "copy": true, 00:14:48.419 "nvme_iov_md": false 00:14:48.419 }, 00:14:48.419 "memory_domains": [ 00:14:48.419 { 00:14:48.419 "dma_device_id": "system", 00:14:48.419 "dma_device_type": 1 00:14:48.419 }, 00:14:48.419 { 00:14:48.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.419 "dma_device_type": 2 00:14:48.419 } 00:14:48.419 ], 00:14:48.419 "driver_specific": {} 00:14:48.419 } 00:14:48.419 ] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.419 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.420 "name": "Existed_Raid", 00:14:48.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.420 "strip_size_kb": 64, 00:14:48.420 "state": "configuring", 00:14:48.420 "raid_level": "concat", 00:14:48.420 "superblock": false, 
00:14:48.420 "num_base_bdevs": 4, 00:14:48.420 "num_base_bdevs_discovered": 3, 00:14:48.420 "num_base_bdevs_operational": 4, 00:14:48.420 "base_bdevs_list": [ 00:14:48.420 { 00:14:48.420 "name": "BaseBdev1", 00:14:48.420 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:48.420 "is_configured": true, 00:14:48.420 "data_offset": 0, 00:14:48.420 "data_size": 65536 00:14:48.420 }, 00:14:48.420 { 00:14:48.420 "name": null, 00:14:48.420 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:48.420 "is_configured": false, 00:14:48.420 "data_offset": 0, 00:14:48.420 "data_size": 65536 00:14:48.420 }, 00:14:48.420 { 00:14:48.420 "name": "BaseBdev3", 00:14:48.420 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:48.420 "is_configured": true, 00:14:48.420 "data_offset": 0, 00:14:48.420 "data_size": 65536 00:14:48.420 }, 00:14:48.420 { 00:14:48.420 "name": "BaseBdev4", 00:14:48.420 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:48.420 "is_configured": true, 00:14:48.420 "data_offset": 0, 00:14:48.420 "data_size": 65536 00:14:48.420 } 00:14:48.420 ] 00:14:48.420 }' 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.420 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:48.988 11:26:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.988 [2024-11-20 11:26:56.698332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.988 "name": "Existed_Raid", 00:14:48.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.988 "strip_size_kb": 64, 00:14:48.988 "state": "configuring", 00:14:48.988 "raid_level": "concat", 00:14:48.988 "superblock": false, 00:14:48.988 "num_base_bdevs": 4, 00:14:48.988 "num_base_bdevs_discovered": 2, 00:14:48.988 "num_base_bdevs_operational": 4, 00:14:48.988 "base_bdevs_list": [ 00:14:48.988 { 00:14:48.988 "name": "BaseBdev1", 00:14:48.988 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:48.988 "is_configured": true, 00:14:48.988 "data_offset": 0, 00:14:48.988 "data_size": 65536 00:14:48.988 }, 00:14:48.988 { 00:14:48.988 "name": null, 00:14:48.988 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:48.988 "is_configured": false, 00:14:48.988 "data_offset": 0, 00:14:48.988 "data_size": 65536 00:14:48.988 }, 00:14:48.988 { 00:14:48.988 "name": null, 00:14:48.988 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:48.988 "is_configured": false, 00:14:48.988 "data_offset": 0, 00:14:48.988 "data_size": 65536 00:14:48.988 }, 00:14:48.988 { 00:14:48.988 "name": "BaseBdev4", 00:14:48.988 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:48.988 "is_configured": true, 00:14:48.988 "data_offset": 0, 00:14:48.988 "data_size": 65536 00:14:48.988 } 00:14:48.988 ] 00:14:48.988 }' 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.988 11:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.557 [2024-11-20 11:26:57.286498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.557 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.558 "name": "Existed_Raid", 00:14:49.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.558 "strip_size_kb": 64, 00:14:49.558 "state": "configuring", 00:14:49.558 "raid_level": "concat", 00:14:49.558 "superblock": false, 00:14:49.558 "num_base_bdevs": 4, 00:14:49.558 "num_base_bdevs_discovered": 3, 00:14:49.558 "num_base_bdevs_operational": 4, 00:14:49.558 "base_bdevs_list": [ 00:14:49.558 { 00:14:49.558 "name": "BaseBdev1", 00:14:49.558 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:49.558 "is_configured": true, 00:14:49.558 "data_offset": 0, 00:14:49.558 "data_size": 65536 00:14:49.558 }, 00:14:49.558 { 00:14:49.558 "name": null, 00:14:49.558 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:49.558 "is_configured": false, 00:14:49.558 "data_offset": 0, 00:14:49.558 "data_size": 65536 00:14:49.558 }, 00:14:49.558 { 00:14:49.558 "name": "BaseBdev3", 00:14:49.558 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:49.558 
"is_configured": true, 00:14:49.558 "data_offset": 0, 00:14:49.558 "data_size": 65536 00:14:49.558 }, 00:14:49.558 { 00:14:49.558 "name": "BaseBdev4", 00:14:49.558 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:49.558 "is_configured": true, 00:14:49.558 "data_offset": 0, 00:14:49.558 "data_size": 65536 00:14:49.558 } 00:14:49.558 ] 00:14:49.558 }' 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.558 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.127 [2024-11-20 11:26:57.854706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.127 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.386 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.386 "name": "Existed_Raid", 00:14:50.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.386 "strip_size_kb": 64, 00:14:50.386 "state": "configuring", 00:14:50.386 "raid_level": "concat", 00:14:50.386 "superblock": false, 00:14:50.386 "num_base_bdevs": 4, 00:14:50.386 "num_base_bdevs_discovered": 2, 00:14:50.386 "num_base_bdevs_operational": 4, 
00:14:50.386 "base_bdevs_list": [ 00:14:50.386 { 00:14:50.386 "name": null, 00:14:50.386 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:50.386 "is_configured": false, 00:14:50.386 "data_offset": 0, 00:14:50.386 "data_size": 65536 00:14:50.386 }, 00:14:50.386 { 00:14:50.386 "name": null, 00:14:50.386 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:50.386 "is_configured": false, 00:14:50.386 "data_offset": 0, 00:14:50.386 "data_size": 65536 00:14:50.386 }, 00:14:50.386 { 00:14:50.386 "name": "BaseBdev3", 00:14:50.386 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:50.386 "is_configured": true, 00:14:50.386 "data_offset": 0, 00:14:50.386 "data_size": 65536 00:14:50.386 }, 00:14:50.386 { 00:14:50.386 "name": "BaseBdev4", 00:14:50.386 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:50.386 "is_configured": true, 00:14:50.386 "data_offset": 0, 00:14:50.386 "data_size": 65536 00:14:50.386 } 00:14:50.386 ] 00:14:50.386 }' 00:14:50.386 11:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.386 11:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.645 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.645 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.645 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.645 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.645 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:50.904 11:26:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.904 [2024-11-20 11:26:58.503601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.904 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.904 "name": "Existed_Raid", 00:14:50.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.904 "strip_size_kb": 64, 00:14:50.904 "state": "configuring", 00:14:50.904 "raid_level": "concat", 00:14:50.904 "superblock": false, 00:14:50.904 "num_base_bdevs": 4, 00:14:50.904 "num_base_bdevs_discovered": 3, 00:14:50.904 "num_base_bdevs_operational": 4, 00:14:50.904 "base_bdevs_list": [ 00:14:50.904 { 00:14:50.904 "name": null, 00:14:50.904 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:50.904 "is_configured": false, 00:14:50.904 "data_offset": 0, 00:14:50.904 "data_size": 65536 00:14:50.904 }, 00:14:50.904 { 00:14:50.904 "name": "BaseBdev2", 00:14:50.904 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:50.904 "is_configured": true, 00:14:50.904 "data_offset": 0, 00:14:50.904 "data_size": 65536 00:14:50.904 }, 00:14:50.904 { 00:14:50.904 "name": "BaseBdev3", 00:14:50.904 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:50.904 "is_configured": true, 00:14:50.904 "data_offset": 0, 00:14:50.904 "data_size": 65536 00:14:50.904 }, 00:14:50.904 { 00:14:50.904 "name": "BaseBdev4", 00:14:50.904 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:50.905 "is_configured": true, 00:14:50.905 "data_offset": 0, 00:14:50.905 "data_size": 65536 00:14:50.905 } 00:14:50.905 ] 00:14:50.905 }' 00:14:50.905 11:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.905 11:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c25f2bf5-0826-4033-be68-fe3a2a1b98b8 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.492 [2024-11-20 11:26:59.206457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:51.492 [2024-11-20 11:26:59.206544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.492 [2024-11-20 11:26:59.206557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:51.492 [2024-11-20 11:26:59.206907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:51.492 [2024-11-20 11:26:59.207104] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.492 [2024-11-20 11:26:59.207136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:51.492 [2024-11-20 11:26:59.207424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.492 NewBaseBdev 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.492 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.493 [ 00:14:51.493 { 
00:14:51.493 "name": "NewBaseBdev", 00:14:51.493 "aliases": [ 00:14:51.493 "c25f2bf5-0826-4033-be68-fe3a2a1b98b8" 00:14:51.493 ], 00:14:51.493 "product_name": "Malloc disk", 00:14:51.493 "block_size": 512, 00:14:51.493 "num_blocks": 65536, 00:14:51.493 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:51.493 "assigned_rate_limits": { 00:14:51.493 "rw_ios_per_sec": 0, 00:14:51.493 "rw_mbytes_per_sec": 0, 00:14:51.493 "r_mbytes_per_sec": 0, 00:14:51.493 "w_mbytes_per_sec": 0 00:14:51.493 }, 00:14:51.493 "claimed": true, 00:14:51.493 "claim_type": "exclusive_write", 00:14:51.493 "zoned": false, 00:14:51.493 "supported_io_types": { 00:14:51.493 "read": true, 00:14:51.493 "write": true, 00:14:51.493 "unmap": true, 00:14:51.493 "flush": true, 00:14:51.493 "reset": true, 00:14:51.493 "nvme_admin": false, 00:14:51.493 "nvme_io": false, 00:14:51.493 "nvme_io_md": false, 00:14:51.493 "write_zeroes": true, 00:14:51.493 "zcopy": true, 00:14:51.493 "get_zone_info": false, 00:14:51.493 "zone_management": false, 00:14:51.493 "zone_append": false, 00:14:51.493 "compare": false, 00:14:51.493 "compare_and_write": false, 00:14:51.493 "abort": true, 00:14:51.493 "seek_hole": false, 00:14:51.493 "seek_data": false, 00:14:51.493 "copy": true, 00:14:51.493 "nvme_iov_md": false 00:14:51.493 }, 00:14:51.493 "memory_domains": [ 00:14:51.493 { 00:14:51.493 "dma_device_id": "system", 00:14:51.493 "dma_device_type": 1 00:14:51.493 }, 00:14:51.493 { 00:14:51.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.493 "dma_device_type": 2 00:14:51.493 } 00:14:51.493 ], 00:14:51.493 "driver_specific": {} 00:14:51.493 } 00:14:51.493 ] 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:51.493 
11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.493 "name": "Existed_Raid", 00:14:51.493 "uuid": "af07ec2b-61b9-42e6-b607-f55ea1f7bfec", 00:14:51.493 "strip_size_kb": 64, 00:14:51.493 "state": "online", 00:14:51.493 "raid_level": "concat", 00:14:51.493 "superblock": false, 00:14:51.493 "num_base_bdevs": 4, 00:14:51.493 "num_base_bdevs_discovered": 4, 00:14:51.493 
"num_base_bdevs_operational": 4, 00:14:51.493 "base_bdevs_list": [ 00:14:51.493 { 00:14:51.493 "name": "NewBaseBdev", 00:14:51.493 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:51.493 "is_configured": true, 00:14:51.493 "data_offset": 0, 00:14:51.493 "data_size": 65536 00:14:51.493 }, 00:14:51.493 { 00:14:51.493 "name": "BaseBdev2", 00:14:51.493 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:51.493 "is_configured": true, 00:14:51.493 "data_offset": 0, 00:14:51.493 "data_size": 65536 00:14:51.493 }, 00:14:51.493 { 00:14:51.493 "name": "BaseBdev3", 00:14:51.493 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:51.493 "is_configured": true, 00:14:51.493 "data_offset": 0, 00:14:51.493 "data_size": 65536 00:14:51.493 }, 00:14:51.493 { 00:14:51.493 "name": "BaseBdev4", 00:14:51.493 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:51.493 "is_configured": true, 00:14:51.493 "data_offset": 0, 00:14:51.493 "data_size": 65536 00:14:51.493 } 00:14:51.493 ] 00:14:51.493 }' 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.493 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.091 [2024-11-20 11:26:59.739124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.091 "name": "Existed_Raid", 00:14:52.091 "aliases": [ 00:14:52.091 "af07ec2b-61b9-42e6-b607-f55ea1f7bfec" 00:14:52.091 ], 00:14:52.091 "product_name": "Raid Volume", 00:14:52.091 "block_size": 512, 00:14:52.091 "num_blocks": 262144, 00:14:52.091 "uuid": "af07ec2b-61b9-42e6-b607-f55ea1f7bfec", 00:14:52.091 "assigned_rate_limits": { 00:14:52.091 "rw_ios_per_sec": 0, 00:14:52.091 "rw_mbytes_per_sec": 0, 00:14:52.091 "r_mbytes_per_sec": 0, 00:14:52.091 "w_mbytes_per_sec": 0 00:14:52.091 }, 00:14:52.091 "claimed": false, 00:14:52.091 "zoned": false, 00:14:52.091 "supported_io_types": { 00:14:52.091 "read": true, 00:14:52.091 "write": true, 00:14:52.091 "unmap": true, 00:14:52.091 "flush": true, 00:14:52.091 "reset": true, 00:14:52.091 "nvme_admin": false, 00:14:52.091 "nvme_io": false, 00:14:52.091 "nvme_io_md": false, 00:14:52.091 "write_zeroes": true, 00:14:52.091 "zcopy": false, 00:14:52.091 "get_zone_info": false, 00:14:52.091 "zone_management": false, 00:14:52.091 "zone_append": false, 00:14:52.091 "compare": false, 00:14:52.091 "compare_and_write": false, 00:14:52.091 "abort": false, 00:14:52.091 "seek_hole": false, 00:14:52.091 "seek_data": false, 00:14:52.091 "copy": false, 00:14:52.091 "nvme_iov_md": false 00:14:52.091 }, 00:14:52.091 "memory_domains": [ 00:14:52.091 { 00:14:52.091 "dma_device_id": "system", 
00:14:52.091 "dma_device_type": 1 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.091 "dma_device_type": 2 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "system", 00:14:52.091 "dma_device_type": 1 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.091 "dma_device_type": 2 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "system", 00:14:52.091 "dma_device_type": 1 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.091 "dma_device_type": 2 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "system", 00:14:52.091 "dma_device_type": 1 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.091 "dma_device_type": 2 00:14:52.091 } 00:14:52.091 ], 00:14:52.091 "driver_specific": { 00:14:52.091 "raid": { 00:14:52.091 "uuid": "af07ec2b-61b9-42e6-b607-f55ea1f7bfec", 00:14:52.091 "strip_size_kb": 64, 00:14:52.091 "state": "online", 00:14:52.091 "raid_level": "concat", 00:14:52.091 "superblock": false, 00:14:52.091 "num_base_bdevs": 4, 00:14:52.091 "num_base_bdevs_discovered": 4, 00:14:52.091 "num_base_bdevs_operational": 4, 00:14:52.091 "base_bdevs_list": [ 00:14:52.091 { 00:14:52.091 "name": "NewBaseBdev", 00:14:52.091 "uuid": "c25f2bf5-0826-4033-be68-fe3a2a1b98b8", 00:14:52.091 "is_configured": true, 00:14:52.091 "data_offset": 0, 00:14:52.091 "data_size": 65536 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "name": "BaseBdev2", 00:14:52.091 "uuid": "3ddf14b9-97dd-4854-bc5d-c91dfd5fca60", 00:14:52.091 "is_configured": true, 00:14:52.091 "data_offset": 0, 00:14:52.091 "data_size": 65536 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "name": "BaseBdev3", 00:14:52.091 "uuid": "3286202a-e945-48a6-9530-2c57cfce47eb", 00:14:52.091 "is_configured": true, 00:14:52.091 "data_offset": 0, 00:14:52.091 "data_size": 65536 00:14:52.091 }, 00:14:52.091 { 00:14:52.091 "name": "BaseBdev4", 
00:14:52.091 "uuid": "b609025f-eb56-43f7-9a0f-e184c5db84d0", 00:14:52.091 "is_configured": true, 00:14:52.091 "data_offset": 0, 00:14:52.091 "data_size": 65536 00:14:52.091 } 00:14:52.091 ] 00:14:52.091 } 00:14:52.091 } 00:14:52.091 }' 00:14:52.091 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:52.092 BaseBdev2 00:14:52.092 BaseBdev3 00:14:52.092 BaseBdev4' 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.092 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 11:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.351 [2024-11-20 11:27:00.102779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.351 [2024-11-20 11:27:00.102817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.351 [2024-11-20 11:27:00.102927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.351 [2024-11-20 11:27:00.103019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.351 [2024-11-20 11:27:00.103036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71315 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71315 
']' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71315 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71315 00:14:52.351 killing process with pid 71315 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71315' 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71315 00:14:52.351 [2024-11-20 11:27:00.138390] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.351 11:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71315 00:14:52.918 [2024-11-20 11:27:00.498680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.855 ************************************ 00:14:53.855 END TEST raid_state_function_test 00:14:53.855 ************************************ 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:53.855 00:14:53.855 real 0m12.811s 00:14:53.855 user 0m21.354s 00:14:53.855 sys 0m1.718s 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.855 11:27:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:53.855 
11:27:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:53.855 11:27:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.855 11:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.855 ************************************ 00:14:53.855 START TEST raid_state_function_test_sb 00:14:53.855 ************************************ 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:53.855 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:53.856 Process raid pid: 71997 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71997 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # 
echo 'Process raid pid: 71997' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71997 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71997 ']' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.856 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 [2024-11-20 11:27:01.711035] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:14:54.116 [2024-11-20 11:27:01.711214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.116 [2024-11-20 11:27:01.894152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.374 [2024-11-20 11:27:02.024817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.634 [2024-11-20 11:27:02.237601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.634 [2024-11-20 11:27:02.237648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.892 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.892 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:54.892 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.892 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.892 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.892 [2024-11-20 11:27:02.659157] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.892 [2024-11-20 11:27:02.659221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.892 [2024-11-20 11:27:02.659237] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.892 [2024-11-20 11:27:02.659254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.893 [2024-11-20 11:27:02.659264] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:54.893 [2024-11-20 11:27:02.659279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.893 [2024-11-20 11:27:02.659288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:54.893 [2024-11-20 11:27:02.659302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.893 11:27:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.893 "name": "Existed_Raid", 00:14:54.893 "uuid": "ca762a60-30a1-4a85-af9e-95037ba5396a", 00:14:54.893 "strip_size_kb": 64, 00:14:54.893 "state": "configuring", 00:14:54.893 "raid_level": "concat", 00:14:54.893 "superblock": true, 00:14:54.893 "num_base_bdevs": 4, 00:14:54.893 "num_base_bdevs_discovered": 0, 00:14:54.893 "num_base_bdevs_operational": 4, 00:14:54.893 "base_bdevs_list": [ 00:14:54.893 { 00:14:54.893 "name": "BaseBdev1", 00:14:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.893 "is_configured": false, 00:14:54.893 "data_offset": 0, 00:14:54.893 "data_size": 0 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": "BaseBdev2", 00:14:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.893 "is_configured": false, 00:14:54.893 "data_offset": 0, 00:14:54.893 "data_size": 0 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": "BaseBdev3", 00:14:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.893 "is_configured": false, 00:14:54.893 "data_offset": 0, 00:14:54.893 "data_size": 0 00:14:54.893 }, 00:14:54.893 { 00:14:54.893 "name": "BaseBdev4", 00:14:54.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.893 "is_configured": false, 00:14:54.893 "data_offset": 0, 00:14:54.893 "data_size": 0 00:14:54.893 } 00:14:54.893 ] 00:14:54.893 }' 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.893 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 11:27:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 [2024-11-20 11:27:03.171228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.460 [2024-11-20 11:27:03.171272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 [2024-11-20 11:27:03.179260] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.460 [2024-11-20 11:27:03.179311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.460 [2024-11-20 11:27:03.179327] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.460 [2024-11-20 11:27:03.179343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.460 [2024-11-20 11:27:03.179353] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.460 [2024-11-20 11:27:03.179367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.460 [2024-11-20 11:27:03.179376] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:55.460 [2024-11-20 11:27:03.179390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 [2024-11-20 11:27:03.226138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.460 BaseBdev1 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.460 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.460 [ 00:14:55.460 { 00:14:55.460 "name": "BaseBdev1", 00:14:55.460 "aliases": [ 00:14:55.460 "b757c507-79a4-4e3c-a249-8bc0625bdd52" 00:14:55.460 ], 00:14:55.461 "product_name": "Malloc disk", 00:14:55.461 "block_size": 512, 00:14:55.461 "num_blocks": 65536, 00:14:55.461 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:55.461 "assigned_rate_limits": { 00:14:55.461 "rw_ios_per_sec": 0, 00:14:55.461 "rw_mbytes_per_sec": 0, 00:14:55.461 "r_mbytes_per_sec": 0, 00:14:55.461 "w_mbytes_per_sec": 0 00:14:55.461 }, 00:14:55.461 "claimed": true, 00:14:55.461 "claim_type": "exclusive_write", 00:14:55.461 "zoned": false, 00:14:55.461 "supported_io_types": { 00:14:55.461 "read": true, 00:14:55.461 "write": true, 00:14:55.461 "unmap": true, 00:14:55.461 "flush": true, 00:14:55.461 "reset": true, 00:14:55.461 "nvme_admin": false, 00:14:55.461 "nvme_io": false, 00:14:55.461 "nvme_io_md": false, 00:14:55.461 "write_zeroes": true, 00:14:55.461 "zcopy": true, 00:14:55.461 "get_zone_info": false, 00:14:55.461 "zone_management": false, 00:14:55.461 "zone_append": false, 00:14:55.461 "compare": false, 00:14:55.461 "compare_and_write": false, 00:14:55.461 "abort": true, 00:14:55.461 "seek_hole": false, 00:14:55.461 "seek_data": false, 00:14:55.461 "copy": true, 00:14:55.461 "nvme_iov_md": false 00:14:55.461 }, 00:14:55.461 "memory_domains": [ 00:14:55.461 { 00:14:55.461 "dma_device_id": "system", 00:14:55.461 "dma_device_type": 1 00:14:55.461 }, 00:14:55.461 { 00:14:55.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.461 "dma_device_type": 2 00:14:55.461 } 
00:14:55.461 ], 00:14:55.461 "driver_specific": {} 00:14:55.461 } 00:14:55.461 ] 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.461 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.461 11:27:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.774 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.774 "name": "Existed_Raid", 00:14:55.774 "uuid": "b65f52cb-c440-4603-9ac6-3815d7e2843d", 00:14:55.774 "strip_size_kb": 64, 00:14:55.774 "state": "configuring", 00:14:55.774 "raid_level": "concat", 00:14:55.774 "superblock": true, 00:14:55.774 "num_base_bdevs": 4, 00:14:55.774 "num_base_bdevs_discovered": 1, 00:14:55.774 "num_base_bdevs_operational": 4, 00:14:55.774 "base_bdevs_list": [ 00:14:55.774 { 00:14:55.774 "name": "BaseBdev1", 00:14:55.774 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:55.774 "is_configured": true, 00:14:55.774 "data_offset": 2048, 00:14:55.774 "data_size": 63488 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev2", 00:14:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.774 "is_configured": false, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 0 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev3", 00:14:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.774 "is_configured": false, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 0 00:14:55.774 }, 00:14:55.774 { 00:14:55.774 "name": "BaseBdev4", 00:14:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.774 "is_configured": false, 00:14:55.774 "data_offset": 0, 00:14:55.774 "data_size": 0 00:14:55.774 } 00:14:55.774 ] 00:14:55.774 }' 00:14:55.774 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.774 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.042 11:27:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 [2024-11-20 11:27:03.774325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.042 [2024-11-20 11:27:03.774400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.042 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.042 [2024-11-20 11:27:03.782425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.043 [2024-11-20 11:27:03.784875] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.043 [2024-11-20 11:27:03.785059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.043 [2024-11-20 11:27:03.785087] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.043 [2024-11-20 11:27:03.785106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.043 [2024-11-20 11:27:03.785117] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.043 [2024-11-20 11:27:03.785130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:56.043 "name": "Existed_Raid", 00:14:56.043 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:56.043 "strip_size_kb": 64, 00:14:56.043 "state": "configuring", 00:14:56.043 "raid_level": "concat", 00:14:56.043 "superblock": true, 00:14:56.043 "num_base_bdevs": 4, 00:14:56.043 "num_base_bdevs_discovered": 1, 00:14:56.043 "num_base_bdevs_operational": 4, 00:14:56.043 "base_bdevs_list": [ 00:14:56.043 { 00:14:56.043 "name": "BaseBdev1", 00:14:56.043 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:56.043 "is_configured": true, 00:14:56.043 "data_offset": 2048, 00:14:56.043 "data_size": 63488 00:14:56.043 }, 00:14:56.043 { 00:14:56.043 "name": "BaseBdev2", 00:14:56.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.043 "is_configured": false, 00:14:56.043 "data_offset": 0, 00:14:56.043 "data_size": 0 00:14:56.043 }, 00:14:56.043 { 00:14:56.043 "name": "BaseBdev3", 00:14:56.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.043 "is_configured": false, 00:14:56.043 "data_offset": 0, 00:14:56.043 "data_size": 0 00:14:56.043 }, 00:14:56.043 { 00:14:56.043 "name": "BaseBdev4", 00:14:56.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.043 "is_configured": false, 00:14:56.043 "data_offset": 0, 00:14:56.043 "data_size": 0 00:14:56.043 } 00:14:56.043 ] 00:14:56.043 }' 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.043 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 [2024-11-20 11:27:04.348449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:56.611 BaseBdev2 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 [ 00:14:56.611 { 00:14:56.611 "name": "BaseBdev2", 00:14:56.611 "aliases": [ 00:14:56.611 "95ab62a7-9b34-4405-b293-513f0dbedb05" 00:14:56.611 ], 00:14:56.611 "product_name": "Malloc disk", 00:14:56.611 "block_size": 512, 00:14:56.611 "num_blocks": 65536, 00:14:56.611 "uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 
00:14:56.611 "assigned_rate_limits": { 00:14:56.611 "rw_ios_per_sec": 0, 00:14:56.611 "rw_mbytes_per_sec": 0, 00:14:56.611 "r_mbytes_per_sec": 0, 00:14:56.611 "w_mbytes_per_sec": 0 00:14:56.611 }, 00:14:56.611 "claimed": true, 00:14:56.611 "claim_type": "exclusive_write", 00:14:56.611 "zoned": false, 00:14:56.611 "supported_io_types": { 00:14:56.611 "read": true, 00:14:56.611 "write": true, 00:14:56.611 "unmap": true, 00:14:56.611 "flush": true, 00:14:56.611 "reset": true, 00:14:56.611 "nvme_admin": false, 00:14:56.611 "nvme_io": false, 00:14:56.611 "nvme_io_md": false, 00:14:56.611 "write_zeroes": true, 00:14:56.611 "zcopy": true, 00:14:56.611 "get_zone_info": false, 00:14:56.611 "zone_management": false, 00:14:56.611 "zone_append": false, 00:14:56.611 "compare": false, 00:14:56.611 "compare_and_write": false, 00:14:56.611 "abort": true, 00:14:56.611 "seek_hole": false, 00:14:56.611 "seek_data": false, 00:14:56.611 "copy": true, 00:14:56.611 "nvme_iov_md": false 00:14:56.611 }, 00:14:56.611 "memory_domains": [ 00:14:56.611 { 00:14:56.611 "dma_device_id": "system", 00:14:56.611 "dma_device_type": 1 00:14:56.611 }, 00:14:56.611 { 00:14:56.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.611 "dma_device_type": 2 00:14:56.611 } 00:14:56.611 ], 00:14:56.611 "driver_specific": {} 00:14:56.611 } 00:14:56.611 ] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.611 "name": "Existed_Raid", 00:14:56.611 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:56.611 "strip_size_kb": 64, 00:14:56.611 "state": "configuring", 00:14:56.611 "raid_level": "concat", 00:14:56.611 "superblock": true, 00:14:56.611 "num_base_bdevs": 4, 00:14:56.611 "num_base_bdevs_discovered": 2, 00:14:56.611 
"num_base_bdevs_operational": 4, 00:14:56.611 "base_bdevs_list": [ 00:14:56.611 { 00:14:56.611 "name": "BaseBdev1", 00:14:56.611 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:56.611 "is_configured": true, 00:14:56.611 "data_offset": 2048, 00:14:56.611 "data_size": 63488 00:14:56.611 }, 00:14:56.611 { 00:14:56.611 "name": "BaseBdev2", 00:14:56.611 "uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 00:14:56.611 "is_configured": true, 00:14:56.611 "data_offset": 2048, 00:14:56.611 "data_size": 63488 00:14:56.611 }, 00:14:56.611 { 00:14:56.611 "name": "BaseBdev3", 00:14:56.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.611 "is_configured": false, 00:14:56.611 "data_offset": 0, 00:14:56.611 "data_size": 0 00:14:56.611 }, 00:14:56.611 { 00:14:56.611 "name": "BaseBdev4", 00:14:56.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.611 "is_configured": false, 00:14:56.611 "data_offset": 0, 00:14:56.611 "data_size": 0 00:14:56.611 } 00:14:56.611 ] 00:14:56.611 }' 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.611 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 [2024-11-20 11:27:04.935781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.180 BaseBdev3 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.180 [ 00:14:57.180 { 00:14:57.180 "name": "BaseBdev3", 00:14:57.180 "aliases": [ 00:14:57.180 "f8b001c6-d068-4300-8013-7f5c52edfaec" 00:14:57.180 ], 00:14:57.180 "product_name": "Malloc disk", 00:14:57.180 "block_size": 512, 00:14:57.180 "num_blocks": 65536, 00:14:57.180 "uuid": "f8b001c6-d068-4300-8013-7f5c52edfaec", 00:14:57.180 "assigned_rate_limits": { 00:14:57.180 "rw_ios_per_sec": 0, 00:14:57.180 "rw_mbytes_per_sec": 0, 00:14:57.180 "r_mbytes_per_sec": 0, 00:14:57.180 "w_mbytes_per_sec": 0 00:14:57.180 }, 00:14:57.180 "claimed": true, 00:14:57.180 "claim_type": "exclusive_write", 00:14:57.180 "zoned": false, 00:14:57.180 "supported_io_types": { 
00:14:57.180 "read": true, 00:14:57.180 "write": true, 00:14:57.180 "unmap": true, 00:14:57.180 "flush": true, 00:14:57.180 "reset": true, 00:14:57.180 "nvme_admin": false, 00:14:57.180 "nvme_io": false, 00:14:57.180 "nvme_io_md": false, 00:14:57.180 "write_zeroes": true, 00:14:57.180 "zcopy": true, 00:14:57.180 "get_zone_info": false, 00:14:57.180 "zone_management": false, 00:14:57.180 "zone_append": false, 00:14:57.180 "compare": false, 00:14:57.180 "compare_and_write": false, 00:14:57.180 "abort": true, 00:14:57.180 "seek_hole": false, 00:14:57.180 "seek_data": false, 00:14:57.180 "copy": true, 00:14:57.180 "nvme_iov_md": false 00:14:57.180 }, 00:14:57.180 "memory_domains": [ 00:14:57.180 { 00:14:57.180 "dma_device_id": "system", 00:14:57.180 "dma_device_type": 1 00:14:57.180 }, 00:14:57.180 { 00:14:57.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.180 "dma_device_type": 2 00:14:57.180 } 00:14:57.180 ], 00:14:57.180 "driver_specific": {} 00:14:57.180 } 00:14:57.180 ] 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.180 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.181 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.181 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.181 "name": "Existed_Raid", 00:14:57.181 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:57.181 "strip_size_kb": 64, 00:14:57.181 "state": "configuring", 00:14:57.181 "raid_level": "concat", 00:14:57.181 "superblock": true, 00:14:57.181 "num_base_bdevs": 4, 00:14:57.181 "num_base_bdevs_discovered": 3, 00:14:57.181 "num_base_bdevs_operational": 4, 00:14:57.181 "base_bdevs_list": [ 00:14:57.181 { 00:14:57.181 "name": "BaseBdev1", 00:14:57.181 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:57.181 "is_configured": true, 00:14:57.181 "data_offset": 2048, 00:14:57.181 "data_size": 63488 00:14:57.181 }, 00:14:57.181 { 00:14:57.181 "name": "BaseBdev2", 00:14:57.181 
"uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 00:14:57.181 "is_configured": true, 00:14:57.181 "data_offset": 2048, 00:14:57.181 "data_size": 63488 00:14:57.181 }, 00:14:57.181 { 00:14:57.181 "name": "BaseBdev3", 00:14:57.181 "uuid": "f8b001c6-d068-4300-8013-7f5c52edfaec", 00:14:57.181 "is_configured": true, 00:14:57.181 "data_offset": 2048, 00:14:57.181 "data_size": 63488 00:14:57.181 }, 00:14:57.181 { 00:14:57.181 "name": "BaseBdev4", 00:14:57.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.181 "is_configured": false, 00:14:57.181 "data_offset": 0, 00:14:57.181 "data_size": 0 00:14:57.181 } 00:14:57.181 ] 00:14:57.181 }' 00:14:57.439 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.439 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.696 [2024-11-20 11:27:05.522244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.696 [2024-11-20 11:27:05.522571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.696 [2024-11-20 11:27:05.522592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:57.696 BaseBdev4 00:14:57.696 [2024-11-20 11:27:05.522958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:57.696 [2024-11-20 11:27:05.523163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.696 [2024-11-20 11:27:05.523184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:57.696 [2024-11-20 11:27:05.523358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.696 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.955 [ 00:14:57.955 { 00:14:57.955 "name": "BaseBdev4", 00:14:57.955 "aliases": [ 00:14:57.955 "f8292437-7ad6-4792-9637-d1bb502719cd" 00:14:57.955 ], 00:14:57.955 "product_name": "Malloc disk", 00:14:57.955 "block_size": 512, 00:14:57.955 
"num_blocks": 65536, 00:14:57.955 "uuid": "f8292437-7ad6-4792-9637-d1bb502719cd", 00:14:57.955 "assigned_rate_limits": { 00:14:57.955 "rw_ios_per_sec": 0, 00:14:57.955 "rw_mbytes_per_sec": 0, 00:14:57.955 "r_mbytes_per_sec": 0, 00:14:57.955 "w_mbytes_per_sec": 0 00:14:57.955 }, 00:14:57.955 "claimed": true, 00:14:57.955 "claim_type": "exclusive_write", 00:14:57.955 "zoned": false, 00:14:57.955 "supported_io_types": { 00:14:57.955 "read": true, 00:14:57.955 "write": true, 00:14:57.955 "unmap": true, 00:14:57.955 "flush": true, 00:14:57.955 "reset": true, 00:14:57.955 "nvme_admin": false, 00:14:57.955 "nvme_io": false, 00:14:57.955 "nvme_io_md": false, 00:14:57.955 "write_zeroes": true, 00:14:57.955 "zcopy": true, 00:14:57.955 "get_zone_info": false, 00:14:57.955 "zone_management": false, 00:14:57.955 "zone_append": false, 00:14:57.955 "compare": false, 00:14:57.955 "compare_and_write": false, 00:14:57.955 "abort": true, 00:14:57.955 "seek_hole": false, 00:14:57.955 "seek_data": false, 00:14:57.955 "copy": true, 00:14:57.955 "nvme_iov_md": false 00:14:57.955 }, 00:14:57.955 "memory_domains": [ 00:14:57.955 { 00:14:57.955 "dma_device_id": "system", 00:14:57.955 "dma_device_type": 1 00:14:57.955 }, 00:14:57.955 { 00:14:57.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.955 "dma_device_type": 2 00:14:57.955 } 00:14:57.955 ], 00:14:57.955 "driver_specific": {} 00:14:57.955 } 00:14:57.955 ] 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.956 "name": "Existed_Raid", 00:14:57.956 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:57.956 "strip_size_kb": 64, 00:14:57.956 "state": "online", 00:14:57.956 "raid_level": "concat", 00:14:57.956 "superblock": true, 00:14:57.956 "num_base_bdevs": 4, 
00:14:57.956 "num_base_bdevs_discovered": 4, 00:14:57.956 "num_base_bdevs_operational": 4, 00:14:57.956 "base_bdevs_list": [ 00:14:57.956 { 00:14:57.956 "name": "BaseBdev1", 00:14:57.956 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:57.956 "is_configured": true, 00:14:57.956 "data_offset": 2048, 00:14:57.956 "data_size": 63488 00:14:57.956 }, 00:14:57.956 { 00:14:57.956 "name": "BaseBdev2", 00:14:57.956 "uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 00:14:57.956 "is_configured": true, 00:14:57.956 "data_offset": 2048, 00:14:57.956 "data_size": 63488 00:14:57.956 }, 00:14:57.956 { 00:14:57.956 "name": "BaseBdev3", 00:14:57.956 "uuid": "f8b001c6-d068-4300-8013-7f5c52edfaec", 00:14:57.956 "is_configured": true, 00:14:57.956 "data_offset": 2048, 00:14:57.956 "data_size": 63488 00:14:57.956 }, 00:14:57.956 { 00:14:57.956 "name": "BaseBdev4", 00:14:57.956 "uuid": "f8292437-7ad6-4792-9637-d1bb502719cd", 00:14:57.956 "is_configured": true, 00:14:57.956 "data_offset": 2048, 00:14:57.956 "data_size": 63488 00:14:57.956 } 00:14:57.956 ] 00:14:57.956 }' 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.956 11:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.522 
11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.522 [2024-11-20 11:27:06.070895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.522 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.522 "name": "Existed_Raid", 00:14:58.522 "aliases": [ 00:14:58.523 "5df862ac-e312-44ae-bb9c-5b5ac08e5423" 00:14:58.523 ], 00:14:58.523 "product_name": "Raid Volume", 00:14:58.523 "block_size": 512, 00:14:58.523 "num_blocks": 253952, 00:14:58.523 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:58.523 "assigned_rate_limits": { 00:14:58.523 "rw_ios_per_sec": 0, 00:14:58.523 "rw_mbytes_per_sec": 0, 00:14:58.523 "r_mbytes_per_sec": 0, 00:14:58.523 "w_mbytes_per_sec": 0 00:14:58.523 }, 00:14:58.523 "claimed": false, 00:14:58.523 "zoned": false, 00:14:58.523 "supported_io_types": { 00:14:58.523 "read": true, 00:14:58.523 "write": true, 00:14:58.523 "unmap": true, 00:14:58.523 "flush": true, 00:14:58.523 "reset": true, 00:14:58.523 "nvme_admin": false, 00:14:58.523 "nvme_io": false, 00:14:58.523 "nvme_io_md": false, 00:14:58.523 "write_zeroes": true, 00:14:58.523 "zcopy": false, 00:14:58.523 "get_zone_info": false, 00:14:58.523 "zone_management": false, 00:14:58.523 "zone_append": false, 00:14:58.523 "compare": false, 00:14:58.523 "compare_and_write": false, 00:14:58.523 "abort": false, 00:14:58.523 "seek_hole": false, 00:14:58.523 "seek_data": false, 00:14:58.523 "copy": false, 00:14:58.523 
"nvme_iov_md": false 00:14:58.523 }, 00:14:58.523 "memory_domains": [ 00:14:58.523 { 00:14:58.523 "dma_device_id": "system", 00:14:58.523 "dma_device_type": 1 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.523 "dma_device_type": 2 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "system", 00:14:58.523 "dma_device_type": 1 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.523 "dma_device_type": 2 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "system", 00:14:58.523 "dma_device_type": 1 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.523 "dma_device_type": 2 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "system", 00:14:58.523 "dma_device_type": 1 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.523 "dma_device_type": 2 00:14:58.523 } 00:14:58.523 ], 00:14:58.523 "driver_specific": { 00:14:58.523 "raid": { 00:14:58.523 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:58.523 "strip_size_kb": 64, 00:14:58.523 "state": "online", 00:14:58.523 "raid_level": "concat", 00:14:58.523 "superblock": true, 00:14:58.523 "num_base_bdevs": 4, 00:14:58.523 "num_base_bdevs_discovered": 4, 00:14:58.523 "num_base_bdevs_operational": 4, 00:14:58.523 "base_bdevs_list": [ 00:14:58.523 { 00:14:58.523 "name": "BaseBdev1", 00:14:58.523 "uuid": "b757c507-79a4-4e3c-a249-8bc0625bdd52", 00:14:58.523 "is_configured": true, 00:14:58.523 "data_offset": 2048, 00:14:58.523 "data_size": 63488 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "name": "BaseBdev2", 00:14:58.523 "uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 00:14:58.523 "is_configured": true, 00:14:58.523 "data_offset": 2048, 00:14:58.523 "data_size": 63488 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "name": "BaseBdev3", 00:14:58.523 "uuid": "f8b001c6-d068-4300-8013-7f5c52edfaec", 00:14:58.523 "is_configured": true, 
00:14:58.523 "data_offset": 2048, 00:14:58.523 "data_size": 63488 00:14:58.523 }, 00:14:58.523 { 00:14:58.523 "name": "BaseBdev4", 00:14:58.523 "uuid": "f8292437-7ad6-4792-9637-d1bb502719cd", 00:14:58.523 "is_configured": true, 00:14:58.523 "data_offset": 2048, 00:14:58.523 "data_size": 63488 00:14:58.523 } 00:14:58.523 ] 00:14:58.523 } 00:14:58.523 } 00:14:58.523 }' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.523 BaseBdev2 00:14:58.523 BaseBdev3 00:14:58.523 BaseBdev4' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.523 11:27:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.782 [2024-11-20 11:27:06.442647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.782 [2024-11-20 11:27:06.442684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.782 [2024-11-20 11:27:06.442750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.782 "name": "Existed_Raid", 00:14:58.782 "uuid": "5df862ac-e312-44ae-bb9c-5b5ac08e5423", 00:14:58.782 "strip_size_kb": 64, 00:14:58.782 "state": "offline", 00:14:58.782 "raid_level": "concat", 00:14:58.782 "superblock": true, 00:14:58.782 "num_base_bdevs": 4, 00:14:58.782 "num_base_bdevs_discovered": 3, 00:14:58.782 "num_base_bdevs_operational": 3, 00:14:58.782 "base_bdevs_list": [ 00:14:58.782 { 00:14:58.782 "name": null, 00:14:58.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.782 "is_configured": false, 00:14:58.782 "data_offset": 0, 00:14:58.782 "data_size": 63488 00:14:58.782 }, 00:14:58.782 { 00:14:58.782 "name": "BaseBdev2", 00:14:58.782 "uuid": "95ab62a7-9b34-4405-b293-513f0dbedb05", 00:14:58.782 "is_configured": true, 00:14:58.782 "data_offset": 2048, 00:14:58.782 "data_size": 63488 00:14:58.782 }, 00:14:58.782 { 00:14:58.782 "name": "BaseBdev3", 00:14:58.782 "uuid": "f8b001c6-d068-4300-8013-7f5c52edfaec", 00:14:58.782 "is_configured": true, 00:14:58.782 "data_offset": 2048, 00:14:58.782 "data_size": 63488 00:14:58.782 }, 00:14:58.782 { 00:14:58.782 "name": "BaseBdev4", 00:14:58.782 "uuid": "f8292437-7ad6-4792-9637-d1bb502719cd", 00:14:58.782 "is_configured": true, 00:14:58.782 "data_offset": 2048, 00:14:58.782 "data_size": 63488 00:14:58.782 } 00:14:58.782 ] 00:14:58.782 }' 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.782 11:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.348 11:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.348 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.348 [2024-11-20 11:27:07.115999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.607 [2024-11-20 11:27:07.261498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:59.607 11:27:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.607 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.607 [2024-11-20 11:27:07.406790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:59.608 [2024-11-20 11:27:07.406863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 BaseBdev2 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 [ 00:14:59.876 { 00:14:59.876 "name": "BaseBdev2", 00:14:59.876 "aliases": [ 00:14:59.876 
"4c6a937b-8e33-441d-b56e-5df6a05057a0" 00:14:59.876 ], 00:14:59.876 "product_name": "Malloc disk", 00:14:59.876 "block_size": 512, 00:14:59.876 "num_blocks": 65536, 00:14:59.876 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:14:59.876 "assigned_rate_limits": { 00:14:59.876 "rw_ios_per_sec": 0, 00:14:59.876 "rw_mbytes_per_sec": 0, 00:14:59.876 "r_mbytes_per_sec": 0, 00:14:59.876 "w_mbytes_per_sec": 0 00:14:59.876 }, 00:14:59.876 "claimed": false, 00:14:59.876 "zoned": false, 00:14:59.876 "supported_io_types": { 00:14:59.876 "read": true, 00:14:59.876 "write": true, 00:14:59.876 "unmap": true, 00:14:59.876 "flush": true, 00:14:59.876 "reset": true, 00:14:59.876 "nvme_admin": false, 00:14:59.876 "nvme_io": false, 00:14:59.876 "nvme_io_md": false, 00:14:59.876 "write_zeroes": true, 00:14:59.876 "zcopy": true, 00:14:59.876 "get_zone_info": false, 00:14:59.876 "zone_management": false, 00:14:59.876 "zone_append": false, 00:14:59.876 "compare": false, 00:14:59.876 "compare_and_write": false, 00:14:59.876 "abort": true, 00:14:59.876 "seek_hole": false, 00:14:59.876 "seek_data": false, 00:14:59.876 "copy": true, 00:14:59.876 "nvme_iov_md": false 00:14:59.876 }, 00:14:59.876 "memory_domains": [ 00:14:59.876 { 00:14:59.876 "dma_device_id": "system", 00:14:59.876 "dma_device_type": 1 00:14:59.876 }, 00:14:59.876 { 00:14:59.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.876 "dma_device_type": 2 00:14:59.876 } 00:14:59.876 ], 00:14:59.876 "driver_specific": {} 00:14:59.876 } 00:14:59.876 ] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.876 11:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 BaseBdev3 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.876 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.876 [ 00:14:59.876 { 
00:14:59.876 "name": "BaseBdev3", 00:14:59.876 "aliases": [ 00:14:59.876 "4d065ba8-315c-42d9-918a-46d4ad07b689" 00:14:59.876 ], 00:14:59.876 "product_name": "Malloc disk", 00:14:59.877 "block_size": 512, 00:14:59.877 "num_blocks": 65536, 00:14:59.877 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:14:59.877 "assigned_rate_limits": { 00:14:59.877 "rw_ios_per_sec": 0, 00:14:59.877 "rw_mbytes_per_sec": 0, 00:14:59.877 "r_mbytes_per_sec": 0, 00:14:59.877 "w_mbytes_per_sec": 0 00:14:59.877 }, 00:14:59.877 "claimed": false, 00:14:59.877 "zoned": false, 00:14:59.877 "supported_io_types": { 00:14:59.877 "read": true, 00:14:59.877 "write": true, 00:14:59.877 "unmap": true, 00:14:59.877 "flush": true, 00:14:59.877 "reset": true, 00:14:59.877 "nvme_admin": false, 00:14:59.877 "nvme_io": false, 00:14:59.877 "nvme_io_md": false, 00:14:59.877 "write_zeroes": true, 00:14:59.877 "zcopy": true, 00:14:59.877 "get_zone_info": false, 00:14:59.877 "zone_management": false, 00:14:59.877 "zone_append": false, 00:14:59.877 "compare": false, 00:14:59.877 "compare_and_write": false, 00:14:59.877 "abort": true, 00:14:59.877 "seek_hole": false, 00:14:59.877 "seek_data": false, 00:14:59.877 "copy": true, 00:14:59.877 "nvme_iov_md": false 00:14:59.877 }, 00:14:59.877 "memory_domains": [ 00:14:59.877 { 00:14:59.877 "dma_device_id": "system", 00:14:59.877 "dma_device_type": 1 00:14:59.877 }, 00:14:59.877 { 00:14:59.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.877 "dma_device_type": 2 00:14:59.877 } 00:14:59.877 ], 00:14:59.877 "driver_specific": {} 00:14:59.877 } 00:14:59.877 ] 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.877 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.136 BaseBdev4 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:00.136 [ 00:15:00.136 { 00:15:00.136 "name": "BaseBdev4", 00:15:00.136 "aliases": [ 00:15:00.136 "1304082a-e2a5-4a6a-aa7e-40e9868f59ef" 00:15:00.136 ], 00:15:00.136 "product_name": "Malloc disk", 00:15:00.136 "block_size": 512, 00:15:00.136 "num_blocks": 65536, 00:15:00.136 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:00.136 "assigned_rate_limits": { 00:15:00.136 "rw_ios_per_sec": 0, 00:15:00.136 "rw_mbytes_per_sec": 0, 00:15:00.136 "r_mbytes_per_sec": 0, 00:15:00.136 "w_mbytes_per_sec": 0 00:15:00.136 }, 00:15:00.136 "claimed": false, 00:15:00.136 "zoned": false, 00:15:00.136 "supported_io_types": { 00:15:00.136 "read": true, 00:15:00.136 "write": true, 00:15:00.136 "unmap": true, 00:15:00.136 "flush": true, 00:15:00.136 "reset": true, 00:15:00.136 "nvme_admin": false, 00:15:00.136 "nvme_io": false, 00:15:00.136 "nvme_io_md": false, 00:15:00.136 "write_zeroes": true, 00:15:00.136 "zcopy": true, 00:15:00.136 "get_zone_info": false, 00:15:00.136 "zone_management": false, 00:15:00.136 "zone_append": false, 00:15:00.136 "compare": false, 00:15:00.136 "compare_and_write": false, 00:15:00.136 "abort": true, 00:15:00.136 "seek_hole": false, 00:15:00.136 "seek_data": false, 00:15:00.136 "copy": true, 00:15:00.136 "nvme_iov_md": false 00:15:00.136 }, 00:15:00.136 "memory_domains": [ 00:15:00.136 { 00:15:00.136 "dma_device_id": "system", 00:15:00.136 "dma_device_type": 1 00:15:00.136 }, 00:15:00.136 { 00:15:00.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.136 "dma_device_type": 2 00:15:00.136 } 00:15:00.136 ], 00:15:00.136 "driver_specific": {} 00:15:00.136 } 00:15:00.136 ] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:00.136 11:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.136 [2024-11-20 11:27:07.778234] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.136 [2024-11-20 11:27:07.778413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.136 [2024-11-20 11:27:07.778463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.136 [2024-11-20 11:27:07.780911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.136 [2024-11-20 11:27:07.780986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.136 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.137 "name": "Existed_Raid", 00:15:00.137 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:00.137 "strip_size_kb": 64, 00:15:00.137 "state": "configuring", 00:15:00.137 "raid_level": "concat", 00:15:00.137 "superblock": true, 00:15:00.137 "num_base_bdevs": 4, 00:15:00.137 "num_base_bdevs_discovered": 3, 00:15:00.137 "num_base_bdevs_operational": 4, 00:15:00.137 "base_bdevs_list": [ 00:15:00.137 { 00:15:00.137 "name": "BaseBdev1", 00:15:00.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.137 "is_configured": false, 00:15:00.137 "data_offset": 0, 00:15:00.137 "data_size": 0 00:15:00.137 }, 00:15:00.137 { 00:15:00.137 "name": "BaseBdev2", 00:15:00.137 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:00.137 "is_configured": true, 00:15:00.137 "data_offset": 2048, 00:15:00.137 "data_size": 63488 
00:15:00.137 }, 00:15:00.137 { 00:15:00.137 "name": "BaseBdev3", 00:15:00.137 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:00.137 "is_configured": true, 00:15:00.137 "data_offset": 2048, 00:15:00.137 "data_size": 63488 00:15:00.137 }, 00:15:00.137 { 00:15:00.137 "name": "BaseBdev4", 00:15:00.137 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:00.137 "is_configured": true, 00:15:00.137 "data_offset": 2048, 00:15:00.137 "data_size": 63488 00:15:00.137 } 00:15:00.137 ] 00:15:00.137 }' 00:15:00.137 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.137 11:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.706 [2024-11-20 11:27:08.258345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.706 "name": "Existed_Raid", 00:15:00.706 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:00.706 "strip_size_kb": 64, 00:15:00.706 "state": "configuring", 00:15:00.706 "raid_level": "concat", 00:15:00.706 "superblock": true, 00:15:00.706 "num_base_bdevs": 4, 00:15:00.706 "num_base_bdevs_discovered": 2, 00:15:00.706 "num_base_bdevs_operational": 4, 00:15:00.706 "base_bdevs_list": [ 00:15:00.706 { 00:15:00.706 "name": "BaseBdev1", 00:15:00.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.706 "is_configured": false, 00:15:00.706 "data_offset": 0, 00:15:00.706 "data_size": 0 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "name": null, 00:15:00.706 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:00.706 "is_configured": false, 00:15:00.706 "data_offset": 0, 00:15:00.706 "data_size": 63488 
00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "name": "BaseBdev3", 00:15:00.706 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:00.706 "is_configured": true, 00:15:00.706 "data_offset": 2048, 00:15:00.706 "data_size": 63488 00:15:00.706 }, 00:15:00.706 { 00:15:00.706 "name": "BaseBdev4", 00:15:00.706 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:00.706 "is_configured": true, 00:15:00.706 "data_offset": 2048, 00:15:00.706 "data_size": 63488 00:15:00.706 } 00:15:00.706 ] 00:15:00.706 }' 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.706 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.965 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.224 [2024-11-20 11:27:08.832023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.224 BaseBdev1 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.224 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.224 [ 00:15:01.224 { 00:15:01.224 "name": "BaseBdev1", 00:15:01.224 "aliases": [ 00:15:01.224 "b895fd50-b3d7-40f5-b549-35ff60c56332" 00:15:01.224 ], 00:15:01.224 "product_name": "Malloc disk", 00:15:01.224 "block_size": 512, 00:15:01.224 "num_blocks": 65536, 00:15:01.224 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:01.224 "assigned_rate_limits": { 00:15:01.224 "rw_ios_per_sec": 0, 00:15:01.224 "rw_mbytes_per_sec": 0, 
00:15:01.224 "r_mbytes_per_sec": 0, 00:15:01.224 "w_mbytes_per_sec": 0 00:15:01.224 }, 00:15:01.224 "claimed": true, 00:15:01.224 "claim_type": "exclusive_write", 00:15:01.224 "zoned": false, 00:15:01.224 "supported_io_types": { 00:15:01.224 "read": true, 00:15:01.224 "write": true, 00:15:01.224 "unmap": true, 00:15:01.224 "flush": true, 00:15:01.224 "reset": true, 00:15:01.224 "nvme_admin": false, 00:15:01.224 "nvme_io": false, 00:15:01.224 "nvme_io_md": false, 00:15:01.225 "write_zeroes": true, 00:15:01.225 "zcopy": true, 00:15:01.225 "get_zone_info": false, 00:15:01.225 "zone_management": false, 00:15:01.225 "zone_append": false, 00:15:01.225 "compare": false, 00:15:01.225 "compare_and_write": false, 00:15:01.225 "abort": true, 00:15:01.225 "seek_hole": false, 00:15:01.225 "seek_data": false, 00:15:01.225 "copy": true, 00:15:01.225 "nvme_iov_md": false 00:15:01.225 }, 00:15:01.225 "memory_domains": [ 00:15:01.225 { 00:15:01.225 "dma_device_id": "system", 00:15:01.225 "dma_device_type": 1 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.225 "dma_device_type": 2 00:15:01.225 } 00:15:01.225 ], 00:15:01.225 "driver_specific": {} 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.225 11:27:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.225 "name": "Existed_Raid", 00:15:01.225 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:01.225 "strip_size_kb": 64, 00:15:01.225 "state": "configuring", 00:15:01.225 "raid_level": "concat", 00:15:01.225 "superblock": true, 00:15:01.225 "num_base_bdevs": 4, 00:15:01.225 "num_base_bdevs_discovered": 3, 00:15:01.225 "num_base_bdevs_operational": 4, 00:15:01.225 "base_bdevs_list": [ 00:15:01.225 { 00:15:01.225 "name": "BaseBdev1", 00:15:01.225 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:01.225 "is_configured": true, 00:15:01.225 "data_offset": 2048, 00:15:01.225 "data_size": 63488 00:15:01.225 }, 00:15:01.225 { 
00:15:01.225 "name": null, 00:15:01.225 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:01.225 "is_configured": false, 00:15:01.225 "data_offset": 0, 00:15:01.225 "data_size": 63488 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "name": "BaseBdev3", 00:15:01.225 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:01.225 "is_configured": true, 00:15:01.225 "data_offset": 2048, 00:15:01.225 "data_size": 63488 00:15:01.225 }, 00:15:01.225 { 00:15:01.225 "name": "BaseBdev4", 00:15:01.225 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:01.225 "is_configured": true, 00:15:01.225 "data_offset": 2048, 00:15:01.225 "data_size": 63488 00:15:01.225 } 00:15:01.225 ] 00:15:01.225 }' 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.225 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 [2024-11-20 11:27:09.424283] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.792 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.792 11:27:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.792 "name": "Existed_Raid", 00:15:01.792 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:01.792 "strip_size_kb": 64, 00:15:01.792 "state": "configuring", 00:15:01.792 "raid_level": "concat", 00:15:01.792 "superblock": true, 00:15:01.792 "num_base_bdevs": 4, 00:15:01.792 "num_base_bdevs_discovered": 2, 00:15:01.792 "num_base_bdevs_operational": 4, 00:15:01.792 "base_bdevs_list": [ 00:15:01.792 { 00:15:01.792 "name": "BaseBdev1", 00:15:01.792 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:01.792 "is_configured": true, 00:15:01.792 "data_offset": 2048, 00:15:01.792 "data_size": 63488 00:15:01.792 }, 00:15:01.792 { 00:15:01.792 "name": null, 00:15:01.792 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:01.792 "is_configured": false, 00:15:01.792 "data_offset": 0, 00:15:01.792 "data_size": 63488 00:15:01.792 }, 00:15:01.792 { 00:15:01.792 "name": null, 00:15:01.793 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:01.793 "is_configured": false, 00:15:01.793 "data_offset": 0, 00:15:01.793 "data_size": 63488 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "name": "BaseBdev4", 00:15:01.793 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:01.793 "is_configured": true, 00:15:01.793 "data_offset": 2048, 00:15:01.793 "data_size": 63488 00:15:01.793 } 00:15:01.793 ] 00:15:01.793 }' 00:15:01.793 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.793 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.360 
11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.360 11:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.360 [2024-11-20 11:27:10.000457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.360 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.361 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.361 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.361 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.361 "name": "Existed_Raid", 00:15:02.361 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:02.361 "strip_size_kb": 64, 00:15:02.361 "state": "configuring", 00:15:02.361 "raid_level": "concat", 00:15:02.361 "superblock": true, 00:15:02.361 "num_base_bdevs": 4, 00:15:02.361 "num_base_bdevs_discovered": 3, 00:15:02.361 "num_base_bdevs_operational": 4, 00:15:02.361 "base_bdevs_list": [ 00:15:02.361 { 00:15:02.361 "name": "BaseBdev1", 00:15:02.361 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:02.361 "is_configured": true, 00:15:02.361 "data_offset": 2048, 00:15:02.361 "data_size": 63488 00:15:02.361 }, 00:15:02.361 { 00:15:02.361 "name": null, 00:15:02.361 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:02.361 "is_configured": false, 00:15:02.361 "data_offset": 0, 00:15:02.361 "data_size": 63488 00:15:02.361 }, 00:15:02.361 { 00:15:02.361 "name": "BaseBdev3", 00:15:02.361 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:02.361 "is_configured": true, 00:15:02.361 "data_offset": 2048, 00:15:02.361 "data_size": 63488 00:15:02.361 }, 00:15:02.361 { 00:15:02.361 "name": "BaseBdev4", 00:15:02.361 "uuid": 
"1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:02.361 "is_configured": true, 00:15:02.361 "data_offset": 2048, 00:15:02.361 "data_size": 63488 00:15:02.361 } 00:15:02.361 ] 00:15:02.361 }' 00:15:02.361 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.361 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.928 [2024-11-20 11:27:10.584650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.928 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.929 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.929 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.929 "name": "Existed_Raid", 00:15:02.929 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:02.929 "strip_size_kb": 64, 00:15:02.929 "state": "configuring", 00:15:02.929 "raid_level": "concat", 00:15:02.929 "superblock": true, 00:15:02.929 "num_base_bdevs": 4, 00:15:02.929 "num_base_bdevs_discovered": 2, 00:15:02.929 "num_base_bdevs_operational": 4, 00:15:02.929 "base_bdevs_list": [ 00:15:02.929 { 00:15:02.929 "name": null, 00:15:02.929 
"uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:02.929 "is_configured": false, 00:15:02.929 "data_offset": 0, 00:15:02.929 "data_size": 63488 00:15:02.929 }, 00:15:02.929 { 00:15:02.929 "name": null, 00:15:02.929 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:02.929 "is_configured": false, 00:15:02.929 "data_offset": 0, 00:15:02.929 "data_size": 63488 00:15:02.929 }, 00:15:02.929 { 00:15:02.929 "name": "BaseBdev3", 00:15:02.929 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:02.929 "is_configured": true, 00:15:02.929 "data_offset": 2048, 00:15:02.929 "data_size": 63488 00:15:02.929 }, 00:15:02.929 { 00:15:02.929 "name": "BaseBdev4", 00:15:02.929 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:02.929 "is_configured": true, 00:15:02.929 "data_offset": 2048, 00:15:02.929 "data_size": 63488 00:15:02.929 } 00:15:02.929 ] 00:15:02.929 }' 00:15:02.929 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.929 11:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 [2024-11-20 11:27:11.207935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.497 "name": "Existed_Raid", 00:15:03.497 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:03.497 "strip_size_kb": 64, 00:15:03.497 "state": "configuring", 00:15:03.497 "raid_level": "concat", 00:15:03.497 "superblock": true, 00:15:03.497 "num_base_bdevs": 4, 00:15:03.497 "num_base_bdevs_discovered": 3, 00:15:03.497 "num_base_bdevs_operational": 4, 00:15:03.497 "base_bdevs_list": [ 00:15:03.497 { 00:15:03.497 "name": null, 00:15:03.497 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:03.497 "is_configured": false, 00:15:03.497 "data_offset": 0, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev2", 00:15:03.497 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev3", 00:15:03.497 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 }, 00:15:03.497 { 00:15:03.497 "name": "BaseBdev4", 00:15:03.497 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:03.497 "is_configured": true, 00:15:03.497 "data_offset": 2048, 00:15:03.497 "data_size": 63488 00:15:03.497 } 00:15:03.497 ] 00:15:03.497 }' 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.497 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.081 11:27:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.081 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b895fd50-b3d7-40f5-b549-35ff60c56332 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.082 [2024-11-20 11:27:11.830203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:04.082 [2024-11-20 11:27:11.830494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:04.082 [2024-11-20 11:27:11.830512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:04.082 NewBaseBdev 00:15:04.082 [2024-11-20 11:27:11.830861] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:04.082 [2024-11-20 11:27:11.831053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:04.082 [2024-11-20 11:27:11.831082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:04.082 [2024-11-20 11:27:11.831236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.082 11:27:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.082 [ 00:15:04.082 { 00:15:04.082 "name": "NewBaseBdev", 00:15:04.082 "aliases": [ 00:15:04.082 "b895fd50-b3d7-40f5-b549-35ff60c56332" 00:15:04.082 ], 00:15:04.082 "product_name": "Malloc disk", 00:15:04.082 "block_size": 512, 00:15:04.082 "num_blocks": 65536, 00:15:04.082 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:04.082 "assigned_rate_limits": { 00:15:04.082 "rw_ios_per_sec": 0, 00:15:04.082 "rw_mbytes_per_sec": 0, 00:15:04.082 "r_mbytes_per_sec": 0, 00:15:04.082 "w_mbytes_per_sec": 0 00:15:04.082 }, 00:15:04.082 "claimed": true, 00:15:04.082 "claim_type": "exclusive_write", 00:15:04.082 "zoned": false, 00:15:04.082 "supported_io_types": { 00:15:04.082 "read": true, 00:15:04.082 "write": true, 00:15:04.082 "unmap": true, 00:15:04.082 "flush": true, 00:15:04.082 "reset": true, 00:15:04.082 "nvme_admin": false, 00:15:04.082 "nvme_io": false, 00:15:04.082 "nvme_io_md": false, 00:15:04.082 "write_zeroes": true, 00:15:04.082 "zcopy": true, 00:15:04.082 "get_zone_info": false, 00:15:04.082 "zone_management": false, 00:15:04.082 "zone_append": false, 00:15:04.082 "compare": false, 00:15:04.082 "compare_and_write": false, 00:15:04.082 "abort": true, 00:15:04.082 "seek_hole": false, 00:15:04.082 "seek_data": false, 00:15:04.082 "copy": true, 00:15:04.082 "nvme_iov_md": false 00:15:04.082 }, 00:15:04.082 "memory_domains": [ 00:15:04.082 { 00:15:04.082 "dma_device_id": "system", 00:15:04.082 "dma_device_type": 1 00:15:04.082 }, 00:15:04.082 { 00:15:04.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.082 "dma_device_type": 2 00:15:04.082 } 00:15:04.082 ], 00:15:04.082 "driver_specific": {} 00:15:04.082 } 00:15:04.082 ] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.082 11:27:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.082 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.364 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.365 "name": "Existed_Raid", 00:15:04.365 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:04.365 "strip_size_kb": 64, 00:15:04.365 
"state": "online", 00:15:04.365 "raid_level": "concat", 00:15:04.365 "superblock": true, 00:15:04.365 "num_base_bdevs": 4, 00:15:04.365 "num_base_bdevs_discovered": 4, 00:15:04.365 "num_base_bdevs_operational": 4, 00:15:04.365 "base_bdevs_list": [ 00:15:04.365 { 00:15:04.365 "name": "NewBaseBdev", 00:15:04.365 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 2048, 00:15:04.365 "data_size": 63488 00:15:04.365 }, 00:15:04.365 { 00:15:04.365 "name": "BaseBdev2", 00:15:04.365 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 2048, 00:15:04.365 "data_size": 63488 00:15:04.365 }, 00:15:04.365 { 00:15:04.365 "name": "BaseBdev3", 00:15:04.365 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 2048, 00:15:04.365 "data_size": 63488 00:15:04.365 }, 00:15:04.365 { 00:15:04.365 "name": "BaseBdev4", 00:15:04.365 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:04.365 "is_configured": true, 00:15:04.365 "data_offset": 2048, 00:15:04.365 "data_size": 63488 00:15:04.365 } 00:15:04.365 ] 00:15:04.365 }' 00:15:04.365 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.365 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.623 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.623 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.623 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.623 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.623 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.623 
11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.624 [2024-11-20 11:27:12.382889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.624 "name": "Existed_Raid", 00:15:04.624 "aliases": [ 00:15:04.624 "a499b778-a168-490a-be0d-b524af7ec980" 00:15:04.624 ], 00:15:04.624 "product_name": "Raid Volume", 00:15:04.624 "block_size": 512, 00:15:04.624 "num_blocks": 253952, 00:15:04.624 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:04.624 "assigned_rate_limits": { 00:15:04.624 "rw_ios_per_sec": 0, 00:15:04.624 "rw_mbytes_per_sec": 0, 00:15:04.624 "r_mbytes_per_sec": 0, 00:15:04.624 "w_mbytes_per_sec": 0 00:15:04.624 }, 00:15:04.624 "claimed": false, 00:15:04.624 "zoned": false, 00:15:04.624 "supported_io_types": { 00:15:04.624 "read": true, 00:15:04.624 "write": true, 00:15:04.624 "unmap": true, 00:15:04.624 "flush": true, 00:15:04.624 "reset": true, 00:15:04.624 "nvme_admin": false, 00:15:04.624 "nvme_io": false, 00:15:04.624 "nvme_io_md": false, 00:15:04.624 "write_zeroes": true, 00:15:04.624 "zcopy": false, 00:15:04.624 "get_zone_info": false, 00:15:04.624 "zone_management": false, 00:15:04.624 "zone_append": false, 00:15:04.624 "compare": false, 00:15:04.624 "compare_and_write": false, 00:15:04.624 "abort": 
false, 00:15:04.624 "seek_hole": false, 00:15:04.624 "seek_data": false, 00:15:04.624 "copy": false, 00:15:04.624 "nvme_iov_md": false 00:15:04.624 }, 00:15:04.624 "memory_domains": [ 00:15:04.624 { 00:15:04.624 "dma_device_id": "system", 00:15:04.624 "dma_device_type": 1 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.624 "dma_device_type": 2 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "system", 00:15:04.624 "dma_device_type": 1 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.624 "dma_device_type": 2 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "system", 00:15:04.624 "dma_device_type": 1 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.624 "dma_device_type": 2 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "system", 00:15:04.624 "dma_device_type": 1 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.624 "dma_device_type": 2 00:15:04.624 } 00:15:04.624 ], 00:15:04.624 "driver_specific": { 00:15:04.624 "raid": { 00:15:04.624 "uuid": "a499b778-a168-490a-be0d-b524af7ec980", 00:15:04.624 "strip_size_kb": 64, 00:15:04.624 "state": "online", 00:15:04.624 "raid_level": "concat", 00:15:04.624 "superblock": true, 00:15:04.624 "num_base_bdevs": 4, 00:15:04.624 "num_base_bdevs_discovered": 4, 00:15:04.624 "num_base_bdevs_operational": 4, 00:15:04.624 "base_bdevs_list": [ 00:15:04.624 { 00:15:04.624 "name": "NewBaseBdev", 00:15:04.624 "uuid": "b895fd50-b3d7-40f5-b549-35ff60c56332", 00:15:04.624 "is_configured": true, 00:15:04.624 "data_offset": 2048, 00:15:04.624 "data_size": 63488 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "name": "BaseBdev2", 00:15:04.624 "uuid": "4c6a937b-8e33-441d-b56e-5df6a05057a0", 00:15:04.624 "is_configured": true, 00:15:04.624 "data_offset": 2048, 00:15:04.624 "data_size": 63488 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 
"name": "BaseBdev3", 00:15:04.624 "uuid": "4d065ba8-315c-42d9-918a-46d4ad07b689", 00:15:04.624 "is_configured": true, 00:15:04.624 "data_offset": 2048, 00:15:04.624 "data_size": 63488 00:15:04.624 }, 00:15:04.624 { 00:15:04.624 "name": "BaseBdev4", 00:15:04.624 "uuid": "1304082a-e2a5-4a6a-aa7e-40e9868f59ef", 00:15:04.624 "is_configured": true, 00:15:04.624 "data_offset": 2048, 00:15:04.624 "data_size": 63488 00:15:04.624 } 00:15:04.624 ] 00:15:04.624 } 00:15:04.624 } 00:15:04.624 }' 00:15:04.624 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:04.883 BaseBdev2 00:15:04.883 BaseBdev3 00:15:04.883 BaseBdev4' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.883 11:27:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.883 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.142 [2024-11-20 11:27:12.734493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.142 [2024-11-20 11:27:12.734664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.142 [2024-11-20 11:27:12.734772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.142 [2024-11-20 11:27:12.734861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.142 [2024-11-20 11:27:12.734877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71997 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71997 ']' 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71997 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71997 00:15:05.142 killing process with pid 71997 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71997' 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71997 00:15:05.142 [2024-11-20 11:27:12.770245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.142 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71997 00:15:05.401 [2024-11-20 11:27:13.122464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.337 ************************************ 00:15:06.337 END TEST raid_state_function_test_sb 00:15:06.337 ************************************ 00:15:06.337 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:06.337 00:15:06.337 real 0m12.558s 00:15:06.337 user 0m20.872s 00:15:06.337 sys 
0m1.720s 00:15:06.337 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.337 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.596 11:27:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:06.596 11:27:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:06.596 11:27:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.596 11:27:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.596 ************************************ 00:15:06.596 START TEST raid_superblock_test 00:15:06.596 ************************************ 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:06.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72681 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72681 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72681 ']' 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.596 11:27:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.596 [2024-11-20 11:27:14.319791] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:06.596 [2024-11-20 11:27:14.319984] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:15:06.856 [2024-11-20 11:27:14.505746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.856 [2024-11-20 11:27:14.634102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.115 [2024-11-20 11:27:14.834584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.115 [2024-11-20 11:27:14.834673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:07.682 
11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.682 malloc1 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.682 [2024-11-20 11:27:15.358823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.682 [2024-11-20 11:27:15.359037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.682 [2024-11-20 11:27:15.359115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:07.682 [2024-11-20 11:27:15.359242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.682 [2024-11-20 11:27:15.362025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.682 [2024-11-20 11:27:15.362188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.682 pt1 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.682 malloc2 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.682 [2024-11-20 11:27:15.410562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.682 [2024-11-20 11:27:15.410647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.682 [2024-11-20 11:27:15.410682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:07.682 [2024-11-20 11:27:15.410697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.682 [2024-11-20 11:27:15.413409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.682 [2024-11-20 11:27:15.413571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.682 
pt2 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:07.682 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 malloc3 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 [2024-11-20 11:27:15.477112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.683 [2024-11-20 11:27:15.477303] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.683 [2024-11-20 11:27:15.477348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:07.683 [2024-11-20 11:27:15.477364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.683 [2024-11-20 11:27:15.480085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.683 [2024-11-20 11:27:15.480129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.683 pt3 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.683 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 malloc4 00:15:07.942 11:27:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.942 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.942 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.942 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.942 [2024-11-20 11:27:15.532798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.943 [2024-11-20 11:27:15.532982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.943 [2024-11-20 11:27:15.533056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:07.943 [2024-11-20 11:27:15.533161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.943 [2024-11-20 11:27:15.535922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.943 [2024-11-20 11:27:15.536073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.943 pt4 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.943 [2024-11-20 11:27:15.544987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.943 [2024-11-20 
11:27:15.547514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.943 [2024-11-20 11:27:15.547759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.943 [2024-11-20 11:27:15.547903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.943 [2024-11-20 11:27:15.548161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:07.943 [2024-11-20 11:27:15.548180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.943 [2024-11-20 11:27:15.548519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.943 [2024-11-20 11:27:15.548762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:07.943 [2024-11-20 11:27:15.548784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:07.943 [2024-11-20 11:27:15.549026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.943 "name": "raid_bdev1", 00:15:07.943 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:07.943 "strip_size_kb": 64, 00:15:07.943 "state": "online", 00:15:07.943 "raid_level": "concat", 00:15:07.943 "superblock": true, 00:15:07.943 "num_base_bdevs": 4, 00:15:07.943 "num_base_bdevs_discovered": 4, 00:15:07.943 "num_base_bdevs_operational": 4, 00:15:07.943 "base_bdevs_list": [ 00:15:07.943 { 00:15:07.943 "name": "pt1", 00:15:07.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.943 "is_configured": true, 00:15:07.943 "data_offset": 2048, 00:15:07.943 "data_size": 63488 00:15:07.943 }, 00:15:07.943 { 00:15:07.943 "name": "pt2", 00:15:07.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.943 "is_configured": true, 00:15:07.943 "data_offset": 2048, 00:15:07.943 "data_size": 63488 00:15:07.943 }, 00:15:07.943 { 00:15:07.943 "name": "pt3", 00:15:07.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.943 "is_configured": true, 00:15:07.943 "data_offset": 2048, 00:15:07.943 
"data_size": 63488 00:15:07.943 }, 00:15:07.943 { 00:15:07.943 "name": "pt4", 00:15:07.943 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.943 "is_configured": true, 00:15:07.943 "data_offset": 2048, 00:15:07.943 "data_size": 63488 00:15:07.943 } 00:15:07.943 ] 00:15:07.943 }' 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.943 11:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.510 [2024-11-20 11:27:16.053522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.510 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.510 "name": "raid_bdev1", 00:15:08.510 "aliases": [ 00:15:08.510 "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e" 
00:15:08.510 ], 00:15:08.510 "product_name": "Raid Volume", 00:15:08.510 "block_size": 512, 00:15:08.510 "num_blocks": 253952, 00:15:08.510 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:08.510 "assigned_rate_limits": { 00:15:08.510 "rw_ios_per_sec": 0, 00:15:08.510 "rw_mbytes_per_sec": 0, 00:15:08.510 "r_mbytes_per_sec": 0, 00:15:08.510 "w_mbytes_per_sec": 0 00:15:08.510 }, 00:15:08.510 "claimed": false, 00:15:08.510 "zoned": false, 00:15:08.510 "supported_io_types": { 00:15:08.510 "read": true, 00:15:08.510 "write": true, 00:15:08.510 "unmap": true, 00:15:08.510 "flush": true, 00:15:08.510 "reset": true, 00:15:08.511 "nvme_admin": false, 00:15:08.511 "nvme_io": false, 00:15:08.511 "nvme_io_md": false, 00:15:08.511 "write_zeroes": true, 00:15:08.511 "zcopy": false, 00:15:08.511 "get_zone_info": false, 00:15:08.511 "zone_management": false, 00:15:08.511 "zone_append": false, 00:15:08.511 "compare": false, 00:15:08.511 "compare_and_write": false, 00:15:08.511 "abort": false, 00:15:08.511 "seek_hole": false, 00:15:08.511 "seek_data": false, 00:15:08.511 "copy": false, 00:15:08.511 "nvme_iov_md": false 00:15:08.511 }, 00:15:08.511 "memory_domains": [ 00:15:08.511 { 00:15:08.511 "dma_device_id": "system", 00:15:08.511 "dma_device_type": 1 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.511 "dma_device_type": 2 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "system", 00:15:08.511 "dma_device_type": 1 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.511 "dma_device_type": 2 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "system", 00:15:08.511 "dma_device_type": 1 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.511 "dma_device_type": 2 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": "system", 00:15:08.511 "dma_device_type": 1 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:08.511 "dma_device_type": 2 00:15:08.511 } 00:15:08.511 ], 00:15:08.511 "driver_specific": { 00:15:08.511 "raid": { 00:15:08.511 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:08.511 "strip_size_kb": 64, 00:15:08.511 "state": "online", 00:15:08.511 "raid_level": "concat", 00:15:08.511 "superblock": true, 00:15:08.511 "num_base_bdevs": 4, 00:15:08.511 "num_base_bdevs_discovered": 4, 00:15:08.511 "num_base_bdevs_operational": 4, 00:15:08.511 "base_bdevs_list": [ 00:15:08.511 { 00:15:08.511 "name": "pt1", 00:15:08.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.511 "is_configured": true, 00:15:08.511 "data_offset": 2048, 00:15:08.511 "data_size": 63488 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "name": "pt2", 00:15:08.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.511 "is_configured": true, 00:15:08.511 "data_offset": 2048, 00:15:08.511 "data_size": 63488 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "name": "pt3", 00:15:08.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.511 "is_configured": true, 00:15:08.511 "data_offset": 2048, 00:15:08.511 "data_size": 63488 00:15:08.511 }, 00:15:08.511 { 00:15:08.511 "name": "pt4", 00:15:08.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.511 "is_configured": true, 00:15:08.511 "data_offset": 2048, 00:15:08.511 "data_size": 63488 00:15:08.511 } 00:15:08.511 ] 00:15:08.511 } 00:15:08.511 } 00:15:08.511 }' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.511 pt2 00:15:08.511 pt3 00:15:08.511 pt4' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.511 11:27:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.511 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:08.770 [2024-11-20 11:27:16.421581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dc618ee0-b9a5-42a1-ae3e-83cc2e71652e 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dc618ee0-b9a5-42a1-ae3e-83cc2e71652e ']' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 [2024-11-20 11:27:16.473245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.770 [2024-11-20 11:27:16.473280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.770 [2024-11-20 11:27:16.473387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.770 [2024-11-20 11:27:16.473480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.770 [2024-11-20 11:27:16.473513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.029 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.030 [2024-11-20 11:27:16.629286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.030 [2024-11-20 11:27:16.631880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.030 [2024-11-20 11:27:16.632069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.030 [2024-11-20 11:27:16.632173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:09.030 [2024-11-20 11:27:16.632378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.030 [2024-11-20 11:27:16.632577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.030 [2024-11-20 11:27:16.632869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.030 [2024-11-20 11:27:16.633119] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:09.030 [2024-11-20 11:27:16.633329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.030 [2024-11-20 11:27:16.633442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:15:09.030 request: 00:15:09.030 { 00:15:09.030 "name": "raid_bdev1", 00:15:09.030 "raid_level": "concat", 00:15:09.030 "base_bdevs": [ 00:15:09.030 "malloc1", 00:15:09.030 "malloc2", 00:15:09.030 "malloc3", 00:15:09.030 "malloc4" 00:15:09.030 ], 00:15:09.030 "strip_size_kb": 64, 00:15:09.030 "superblock": false, 00:15:09.030 "method": "bdev_raid_create", 00:15:09.030 "req_id": 1 00:15:09.030 } 00:15:09.030 Got JSON-RPC error response 00:15:09.030 response: 00:15:09.030 { 00:15:09.030 "code": -17, 00:15:09.030 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.030 } 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.030 [2024-11-20 11:27:16.693844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.030 [2024-11-20 11:27:16.694046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.030 [2024-11-20 11:27:16.694182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.030 [2024-11-20 11:27:16.694212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.030 [2024-11-20 11:27:16.697083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.030 [2024-11-20 11:27:16.697136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.030 [2024-11-20 11:27:16.697246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.030 [2024-11-20 11:27:16.697331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.030 pt1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.030 "name": "raid_bdev1", 00:15:09.030 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:09.030 "strip_size_kb": 64, 00:15:09.030 "state": "configuring", 00:15:09.030 "raid_level": "concat", 00:15:09.030 "superblock": true, 00:15:09.030 "num_base_bdevs": 4, 00:15:09.030 "num_base_bdevs_discovered": 1, 00:15:09.030 "num_base_bdevs_operational": 4, 00:15:09.030 "base_bdevs_list": [ 00:15:09.030 { 00:15:09.030 "name": "pt1", 00:15:09.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.030 "is_configured": true, 00:15:09.030 "data_offset": 2048, 00:15:09.030 "data_size": 63488 00:15:09.030 }, 00:15:09.030 { 00:15:09.030 "name": null, 00:15:09.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.030 "is_configured": false, 00:15:09.030 "data_offset": 2048, 00:15:09.030 "data_size": 63488 00:15:09.030 }, 00:15:09.030 { 00:15:09.030 "name": null, 00:15:09.030 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.030 "is_configured": false, 00:15:09.030 "data_offset": 2048, 00:15:09.030 "data_size": 63488 00:15:09.030 }, 00:15:09.030 { 00:15:09.030 "name": null, 00:15:09.030 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.030 "is_configured": false, 00:15:09.030 "data_offset": 2048, 00:15:09.030 "data_size": 63488 00:15:09.030 } 00:15:09.030 ] 00:15:09.030 }' 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.030 11:27:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.597 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:09.597 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.597 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.597 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.597 [2024-11-20 11:27:17.238006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.597 [2024-11-20 11:27:17.238241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.598 [2024-11-20 11:27:17.238281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:09.598 [2024-11-20 11:27:17.238300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.598 [2024-11-20 11:27:17.238865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.598 [2024-11-20 11:27:17.238902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.598 [2024-11-20 11:27:17.239004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.598 [2024-11-20 11:27:17.239041] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.598 pt2 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.598 [2024-11-20 11:27:17.246000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.598 11:27:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.598 "name": "raid_bdev1", 00:15:09.598 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:09.598 "strip_size_kb": 64, 00:15:09.598 "state": "configuring", 00:15:09.598 "raid_level": "concat", 00:15:09.598 "superblock": true, 00:15:09.598 "num_base_bdevs": 4, 00:15:09.598 "num_base_bdevs_discovered": 1, 00:15:09.598 "num_base_bdevs_operational": 4, 00:15:09.598 "base_bdevs_list": [ 00:15:09.598 { 00:15:09.598 "name": "pt1", 00:15:09.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.598 "is_configured": true, 00:15:09.598 "data_offset": 2048, 00:15:09.598 "data_size": 63488 00:15:09.598 }, 00:15:09.598 { 00:15:09.598 "name": null, 00:15:09.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.598 "is_configured": false, 00:15:09.598 "data_offset": 0, 00:15:09.598 "data_size": 63488 00:15:09.598 }, 00:15:09.598 { 00:15:09.598 "name": null, 00:15:09.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.598 "is_configured": false, 00:15:09.598 "data_offset": 2048, 00:15:09.598 "data_size": 63488 00:15:09.598 }, 00:15:09.598 { 00:15:09.598 "name": null, 00:15:09.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.598 "is_configured": false, 00:15:09.598 "data_offset": 2048, 00:15:09.598 "data_size": 63488 00:15:09.598 } 00:15:09.598 ] 00:15:09.598 }' 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.598 11:27:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 [2024-11-20 11:27:17.762148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.167 [2024-11-20 11:27:17.762227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.167 [2024-11-20 11:27:17.762259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:10.167 [2024-11-20 11:27:17.762273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.167 [2024-11-20 11:27:17.762842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.167 [2024-11-20 11:27:17.762868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.167 [2024-11-20 11:27:17.762975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.167 [2024-11-20 11:27:17.763007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.167 pt2 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 [2024-11-20 11:27:17.774111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.167 [2024-11-20 11:27:17.774299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.167 [2024-11-20 11:27:17.774378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:10.167 [2024-11-20 11:27:17.774503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.167 [2024-11-20 11:27:17.775069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.167 [2024-11-20 11:27:17.775212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.167 [2024-11-20 11:27:17.775323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.167 [2024-11-20 11:27:17.775356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.167 pt3 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 [2024-11-20 11:27:17.782087] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.167 [2024-11-20 11:27:17.782148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.167 [2024-11-20 11:27:17.782178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:10.167 [2024-11-20 11:27:17.782192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.167 [2024-11-20 11:27:17.782664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.167 [2024-11-20 11:27:17.782693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.167 [2024-11-20 11:27:17.782780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.167 [2024-11-20 11:27:17.782809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.167 [2024-11-20 11:27:17.782976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.167 [2024-11-20 11:27:17.782992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:10.167 [2024-11-20 11:27:17.783300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.167 [2024-11-20 11:27:17.783485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.167 [2024-11-20 11:27:17.783506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:10.167 [2024-11-20 11:27:17.783684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.167 pt4 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.167 "name": "raid_bdev1", 00:15:10.167 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:10.167 "strip_size_kb": 64, 00:15:10.167 "state": "online", 00:15:10.167 "raid_level": "concat", 00:15:10.167 
"superblock": true, 00:15:10.167 "num_base_bdevs": 4, 00:15:10.167 "num_base_bdevs_discovered": 4, 00:15:10.167 "num_base_bdevs_operational": 4, 00:15:10.167 "base_bdevs_list": [ 00:15:10.167 { 00:15:10.167 "name": "pt1", 00:15:10.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.167 "is_configured": true, 00:15:10.167 "data_offset": 2048, 00:15:10.167 "data_size": 63488 00:15:10.167 }, 00:15:10.167 { 00:15:10.167 "name": "pt2", 00:15:10.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.167 "is_configured": true, 00:15:10.167 "data_offset": 2048, 00:15:10.167 "data_size": 63488 00:15:10.167 }, 00:15:10.167 { 00:15:10.167 "name": "pt3", 00:15:10.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.167 "is_configured": true, 00:15:10.167 "data_offset": 2048, 00:15:10.167 "data_size": 63488 00:15:10.167 }, 00:15:10.167 { 00:15:10.167 "name": "pt4", 00:15:10.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.167 "is_configured": true, 00:15:10.167 "data_offset": 2048, 00:15:10.167 "data_size": 63488 00:15:10.167 } 00:15:10.167 ] 00:15:10.167 }' 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.167 11:27:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.735 11:27:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.735 [2024-11-20 11:27:18.322716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.735 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.735 "name": "raid_bdev1", 00:15:10.735 "aliases": [ 00:15:10.735 "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e" 00:15:10.735 ], 00:15:10.735 "product_name": "Raid Volume", 00:15:10.735 "block_size": 512, 00:15:10.735 "num_blocks": 253952, 00:15:10.735 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:10.735 "assigned_rate_limits": { 00:15:10.735 "rw_ios_per_sec": 0, 00:15:10.735 "rw_mbytes_per_sec": 0, 00:15:10.735 "r_mbytes_per_sec": 0, 00:15:10.735 "w_mbytes_per_sec": 0 00:15:10.735 }, 00:15:10.735 "claimed": false, 00:15:10.735 "zoned": false, 00:15:10.735 "supported_io_types": { 00:15:10.735 "read": true, 00:15:10.735 "write": true, 00:15:10.736 "unmap": true, 00:15:10.736 "flush": true, 00:15:10.736 "reset": true, 00:15:10.736 "nvme_admin": false, 00:15:10.736 "nvme_io": false, 00:15:10.736 "nvme_io_md": false, 00:15:10.736 "write_zeroes": true, 00:15:10.736 "zcopy": false, 00:15:10.736 "get_zone_info": false, 00:15:10.736 "zone_management": false, 00:15:10.736 "zone_append": false, 00:15:10.736 "compare": false, 00:15:10.736 "compare_and_write": false, 00:15:10.736 "abort": false, 00:15:10.736 "seek_hole": false, 00:15:10.736 "seek_data": false, 00:15:10.736 "copy": false, 00:15:10.736 "nvme_iov_md": false 00:15:10.736 }, 00:15:10.736 
"memory_domains": [ 00:15:10.736 { 00:15:10.736 "dma_device_id": "system", 00:15:10.736 "dma_device_type": 1 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.736 "dma_device_type": 2 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "system", 00:15:10.736 "dma_device_type": 1 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.736 "dma_device_type": 2 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "system", 00:15:10.736 "dma_device_type": 1 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.736 "dma_device_type": 2 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "system", 00:15:10.736 "dma_device_type": 1 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.736 "dma_device_type": 2 00:15:10.736 } 00:15:10.736 ], 00:15:10.736 "driver_specific": { 00:15:10.736 "raid": { 00:15:10.736 "uuid": "dc618ee0-b9a5-42a1-ae3e-83cc2e71652e", 00:15:10.736 "strip_size_kb": 64, 00:15:10.736 "state": "online", 00:15:10.736 "raid_level": "concat", 00:15:10.736 "superblock": true, 00:15:10.736 "num_base_bdevs": 4, 00:15:10.736 "num_base_bdevs_discovered": 4, 00:15:10.736 "num_base_bdevs_operational": 4, 00:15:10.736 "base_bdevs_list": [ 00:15:10.736 { 00:15:10.736 "name": "pt1", 00:15:10.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.736 "is_configured": true, 00:15:10.736 "data_offset": 2048, 00:15:10.736 "data_size": 63488 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "name": "pt2", 00:15:10.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.736 "is_configured": true, 00:15:10.736 "data_offset": 2048, 00:15:10.736 "data_size": 63488 00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "name": "pt3", 00:15:10.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.736 "is_configured": true, 00:15:10.736 "data_offset": 2048, 00:15:10.736 "data_size": 63488 
00:15:10.736 }, 00:15:10.736 { 00:15:10.736 "name": "pt4", 00:15:10.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.736 "is_configured": true, 00:15:10.736 "data_offset": 2048, 00:15:10.736 "data_size": 63488 00:15:10.736 } 00:15:10.736 ] 00:15:10.736 } 00:15:10.736 } 00:15:10.736 }' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:10.736 pt2 00:15:10.736 pt3 00:15:10.736 pt4' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.736 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.995 [2024-11-20 11:27:18.682792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dc618ee0-b9a5-42a1-ae3e-83cc2e71652e '!=' dc618ee0-b9a5-42a1-ae3e-83cc2e71652e ']' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72681 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72681 ']' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72681 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72681 00:15:10.995 killing process with pid 72681 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72681' 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72681 00:15:10.995 [2024-11-20 11:27:18.751610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.995 11:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72681 00:15:10.995 [2024-11-20 11:27:18.751734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.995 [2024-11-20 11:27:18.751840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.995 [2024-11-20 11:27:18.751856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:11.577 [2024-11-20 11:27:19.111779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.512 11:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:12.512 00:15:12.512 real 0m5.929s 00:15:12.512 user 0m8.942s 00:15:12.512 sys 0m0.866s 00:15:12.512 ************************************ 00:15:12.512 11:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.512 11:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.512 END TEST raid_superblock_test 
00:15:12.512 ************************************ 00:15:12.512 11:27:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:12.512 11:27:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:12.512 11:27:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.512 11:27:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.512 ************************************ 00:15:12.512 START TEST raid_read_error_test 00:15:12.512 ************************************ 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ULGqkRTbTO 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72951 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72951 00:15:12.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72951 ']' 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.512 11:27:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.512 [2024-11-20 11:27:20.294267] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:12.512 [2024-11-20 11:27:20.294687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72951 ] 00:15:12.771 [2024-11-20 11:27:20.474644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.030 [2024-11-20 11:27:20.628510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.030 [2024-11-20 11:27:20.845997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.030 [2024-11-20 11:27:20.847131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.598 BaseBdev1_malloc 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.598 true 00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:13.598 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.599 [2024-11-20 11:27:21.415942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:13.599 [2024-11-20 11:27:21.416157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.599 [2024-11-20 11:27:21.416200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:13.599 [2024-11-20 11:27:21.416221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.599 [2024-11-20 11:27:21.419096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.599 [2024-11-20 11:27:21.419148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.599 BaseBdev1 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.599 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 BaseBdev2_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 true 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 [2024-11-20 11:27:21.480066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:13.858 [2024-11-20 11:27:21.480142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.858 [2024-11-20 11:27:21.480170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:13.858 [2024-11-20 11:27:21.480187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.858 [2024-11-20 11:27:21.483006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.858 [2024-11-20 11:27:21.483055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.858 BaseBdev2 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 BaseBdev3_malloc 00:15:13.858 11:27:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 true 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 [2024-11-20 11:27:21.561236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:13.858 [2024-11-20 11:27:21.561313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.858 [2024-11-20 11:27:21.561345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:13.858 [2024-11-20 11:27:21.561363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.858 [2024-11-20 11:27:21.564269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.858 [2024-11-20 11:27:21.564320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:13.858 BaseBdev3 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 BaseBdev4_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 true 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 [2024-11-20 11:27:21.629213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:13.858 [2024-11-20 11:27:21.629417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.858 [2024-11-20 11:27:21.629491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:13.858 [2024-11-20 11:27:21.629607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.858 [2024-11-20 11:27:21.632533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.858 [2024-11-20 11:27:21.632715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:13.858 BaseBdev4 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.858 [2024-11-20 11:27:21.641424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.858 [2024-11-20 11:27:21.643917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.858 [2024-11-20 11:27:21.644164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.858 [2024-11-20 11:27:21.644279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.858 [2024-11-20 11:27:21.644596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:13.858 [2024-11-20 11:27:21.644636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:13.858 [2024-11-20 11:27:21.644977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:13.858 [2024-11-20 11:27:21.645208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:13.858 [2024-11-20 11:27:21.645229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:13.858 [2024-11-20 11:27:21.645498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:13.858 11:27:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.858 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.859 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.117 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.117 "name": "raid_bdev1", 00:15:14.117 "uuid": "561f4ad1-ee35-4f9b-a2ec-57e1ae08e222", 00:15:14.117 "strip_size_kb": 64, 00:15:14.117 "state": "online", 00:15:14.117 "raid_level": "concat", 00:15:14.117 "superblock": true, 00:15:14.117 "num_base_bdevs": 4, 00:15:14.117 "num_base_bdevs_discovered": 4, 00:15:14.117 "num_base_bdevs_operational": 4, 00:15:14.117 "base_bdevs_list": [ 
00:15:14.117 { 00:15:14.117 "name": "BaseBdev1", 00:15:14.117 "uuid": "b284eaca-165c-5d77-99fc-030a4868e0cb", 00:15:14.117 "is_configured": true, 00:15:14.117 "data_offset": 2048, 00:15:14.117 "data_size": 63488 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev2", 00:15:14.117 "uuid": "7aa4943d-c70d-5a65-a773-c00504c58a59", 00:15:14.117 "is_configured": true, 00:15:14.117 "data_offset": 2048, 00:15:14.117 "data_size": 63488 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev3", 00:15:14.117 "uuid": "bb942808-7692-5a3e-9eb3-43deac336e1a", 00:15:14.117 "is_configured": true, 00:15:14.117 "data_offset": 2048, 00:15:14.117 "data_size": 63488 00:15:14.117 }, 00:15:14.117 { 00:15:14.117 "name": "BaseBdev4", 00:15:14.117 "uuid": "e2abe1fd-e88c-51e8-a707-77ec45c38a98", 00:15:14.117 "is_configured": true, 00:15:14.117 "data_offset": 2048, 00:15:14.117 "data_size": 63488 00:15:14.117 } 00:15:14.117 ] 00:15:14.117 }' 00:15:14.117 11:27:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.117 11:27:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.376 11:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:14.376 11:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:14.635 [2024-11-20 11:27:22.319065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.570 11:27:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.570 11:27:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.570 "name": "raid_bdev1", 00:15:15.570 "uuid": "561f4ad1-ee35-4f9b-a2ec-57e1ae08e222", 00:15:15.570 "strip_size_kb": 64, 00:15:15.570 "state": "online", 00:15:15.570 "raid_level": "concat", 00:15:15.570 "superblock": true, 00:15:15.570 "num_base_bdevs": 4, 00:15:15.570 "num_base_bdevs_discovered": 4, 00:15:15.570 "num_base_bdevs_operational": 4, 00:15:15.570 "base_bdevs_list": [ 00:15:15.570 { 00:15:15.570 "name": "BaseBdev1", 00:15:15.570 "uuid": "b284eaca-165c-5d77-99fc-030a4868e0cb", 00:15:15.570 "is_configured": true, 00:15:15.570 "data_offset": 2048, 00:15:15.570 "data_size": 63488 00:15:15.570 }, 00:15:15.570 { 00:15:15.570 "name": "BaseBdev2", 00:15:15.570 "uuid": "7aa4943d-c70d-5a65-a773-c00504c58a59", 00:15:15.570 "is_configured": true, 00:15:15.570 "data_offset": 2048, 00:15:15.570 "data_size": 63488 00:15:15.570 }, 00:15:15.570 { 00:15:15.570 "name": "BaseBdev3", 00:15:15.570 "uuid": "bb942808-7692-5a3e-9eb3-43deac336e1a", 00:15:15.570 "is_configured": true, 00:15:15.570 "data_offset": 2048, 00:15:15.570 "data_size": 63488 00:15:15.570 }, 00:15:15.570 { 00:15:15.570 "name": "BaseBdev4", 00:15:15.570 "uuid": "e2abe1fd-e88c-51e8-a707-77ec45c38a98", 00:15:15.570 "is_configured": true, 00:15:15.570 "data_offset": 2048, 00:15:15.570 "data_size": 63488 00:15:15.570 } 00:15:15.570 ] 00:15:15.570 }' 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.570 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.136 [2024-11-20 11:27:23.738828] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.136 [2024-11-20 11:27:23.739006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.136 [2024-11-20 11:27:23.742421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.136 [2024-11-20 11:27:23.742640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.136 [2024-11-20 11:27:23.742834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.136 [2024-11-20 11:27:23.743007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:16.136 { 00:15:16.136 "results": [ 00:15:16.136 { 00:15:16.136 "job": "raid_bdev1", 00:15:16.136 "core_mask": "0x1", 00:15:16.136 "workload": "randrw", 00:15:16.136 "percentage": 50, 00:15:16.136 "status": "finished", 00:15:16.136 "queue_depth": 1, 00:15:16.136 "io_size": 131072, 00:15:16.136 "runtime": 1.417471, 00:15:16.136 "iops": 10485.576071750322, 00:15:16.136 "mibps": 1310.6970089687902, 00:15:16.136 "io_failed": 1, 00:15:16.136 "io_timeout": 0, 00:15:16.136 "avg_latency_us": 133.1898855073882, 00:15:16.136 "min_latency_us": 43.52, 00:15:16.136 "max_latency_us": 1809.6872727272728 00:15:16.136 } 00:15:16.136 ], 00:15:16.136 "core_count": 1 00:15:16.136 } 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72951 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72951 ']' 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72951 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72951 00:15:16.136 killing process with pid 72951 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72951' 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72951 00:15:16.136 [2024-11-20 11:27:23.778520] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.136 11:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72951 00:15:16.395 [2024-11-20 11:27:24.074744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ULGqkRTbTO 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:17.330 00:15:17.330 real 0m4.974s 00:15:17.330 user 0m6.202s 00:15:17.330 sys 0m0.595s 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:17.330 ************************************ 00:15:17.330 END TEST raid_read_error_test 00:15:17.330 ************************************ 00:15:17.330 11:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.588 11:27:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:17.588 11:27:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:17.588 11:27:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.588 11:27:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.588 ************************************ 00:15:17.588 START TEST raid_write_error_test 00:15:17.588 ************************************ 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hXnBbxDclE 00:15:17.588 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73097 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73097 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73097 ']' 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.588 11:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.588 [2024-11-20 11:27:25.326266] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:17.588 [2024-11-20 11:27:25.326419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73097 ] 00:15:17.846 [2024-11-20 11:27:25.505609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.846 [2024-11-20 11:27:25.661225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.104 [2024-11-20 11:27:25.866199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.104 [2024-11-20 11:27:25.866256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 BaseBdev1_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 true 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 [2024-11-20 11:27:26.370637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:18.671 [2024-11-20 11:27:26.370707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.671 [2024-11-20 11:27:26.370737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:18.671 [2024-11-20 11:27:26.370756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.671 [2024-11-20 11:27:26.373596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.671 [2024-11-20 11:27:26.373668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.671 BaseBdev1 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 BaseBdev2_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:18.671 11:27:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 true 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 [2024-11-20 11:27:26.434645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:18.671 [2024-11-20 11:27:26.434886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.671 [2024-11-20 11:27:26.434926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:18.671 [2024-11-20 11:27:26.434946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.671 [2024-11-20 11:27:26.437778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.671 [2024-11-20 11:27:26.437828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.671 BaseBdev2 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:18.671 BaseBdev3_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 true 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.671 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.671 [2024-11-20 11:27:26.512131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:18.671 [2024-11-20 11:27:26.512204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.671 [2024-11-20 11:27:26.512233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:18.671 [2024-11-20 11:27:26.512251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.929 [2024-11-20 11:27:26.515163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.929 [2024-11-20 11:27:26.515215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.929 BaseBdev3 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.929 BaseBdev4_malloc 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.929 true 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.929 [2024-11-20 11:27:26.581887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:18.929 [2024-11-20 11:27:26.581968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.929 [2024-11-20 11:27:26.582000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:18.929 [2024-11-20 11:27:26.582019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.929 [2024-11-20 11:27:26.584934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.929 [2024-11-20 11:27:26.584987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:18.929 BaseBdev4 
00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.929 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.930 [2024-11-20 11:27:26.594041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.930 [2024-11-20 11:27:26.596520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.930 [2024-11-20 11:27:26.596791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.930 [2024-11-20 11:27:26.596911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.930 [2024-11-20 11:27:26.597227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:18.930 [2024-11-20 11:27:26.597252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:18.930 [2024-11-20 11:27:26.597630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:18.930 [2024-11-20 11:27:26.597848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:18.930 [2024-11-20 11:27:26.597867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:18.930 [2024-11-20 11:27:26.598143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.930 "name": "raid_bdev1", 00:15:18.930 "uuid": "786c0c2c-4bb5-42e2-82f1-3bfcbb55c0f7", 00:15:18.930 "strip_size_kb": 64, 00:15:18.930 "state": "online", 00:15:18.930 "raid_level": "concat", 00:15:18.930 "superblock": true, 00:15:18.930 "num_base_bdevs": 4, 00:15:18.930 "num_base_bdevs_discovered": 4, 00:15:18.930 
"num_base_bdevs_operational": 4, 00:15:18.930 "base_bdevs_list": [ 00:15:18.930 { 00:15:18.930 "name": "BaseBdev1", 00:15:18.930 "uuid": "f22df5a0-bc7e-5356-8e32-756900d5b29a", 00:15:18.930 "is_configured": true, 00:15:18.930 "data_offset": 2048, 00:15:18.930 "data_size": 63488 00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "name": "BaseBdev2", 00:15:18.930 "uuid": "384619b5-4dad-58bb-97df-acafb27fd119", 00:15:18.930 "is_configured": true, 00:15:18.930 "data_offset": 2048, 00:15:18.930 "data_size": 63488 00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "name": "BaseBdev3", 00:15:18.930 "uuid": "c7ec5a0f-d6a8-5645-898e-dea3c1a20df0", 00:15:18.930 "is_configured": true, 00:15:18.930 "data_offset": 2048, 00:15:18.930 "data_size": 63488 00:15:18.930 }, 00:15:18.930 { 00:15:18.930 "name": "BaseBdev4", 00:15:18.930 "uuid": "ac9bf46b-2184-5622-9c8a-a548d2e35fa0", 00:15:18.930 "is_configured": true, 00:15:18.930 "data_offset": 2048, 00:15:18.930 "data_size": 63488 00:15:18.930 } 00:15:18.930 ] 00:15:18.930 }' 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.930 11:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.496 11:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:19.496 11:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:19.496 [2024-11-20 11:27:27.243676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.435 11:27:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.435 "name": "raid_bdev1", 00:15:20.435 "uuid": "786c0c2c-4bb5-42e2-82f1-3bfcbb55c0f7", 00:15:20.435 "strip_size_kb": 64, 00:15:20.435 "state": "online", 00:15:20.435 "raid_level": "concat", 00:15:20.435 "superblock": true, 00:15:20.435 "num_base_bdevs": 4, 00:15:20.435 "num_base_bdevs_discovered": 4, 00:15:20.435 "num_base_bdevs_operational": 4, 00:15:20.435 "base_bdevs_list": [ 00:15:20.435 { 00:15:20.435 "name": "BaseBdev1", 00:15:20.435 "uuid": "f22df5a0-bc7e-5356-8e32-756900d5b29a", 00:15:20.435 "is_configured": true, 00:15:20.435 "data_offset": 2048, 00:15:20.435 "data_size": 63488 00:15:20.435 }, 00:15:20.435 { 00:15:20.435 "name": "BaseBdev2", 00:15:20.435 "uuid": "384619b5-4dad-58bb-97df-acafb27fd119", 00:15:20.435 "is_configured": true, 00:15:20.435 "data_offset": 2048, 00:15:20.435 "data_size": 63488 00:15:20.435 }, 00:15:20.435 { 00:15:20.435 "name": "BaseBdev3", 00:15:20.435 "uuid": "c7ec5a0f-d6a8-5645-898e-dea3c1a20df0", 00:15:20.435 "is_configured": true, 00:15:20.435 "data_offset": 2048, 00:15:20.435 "data_size": 63488 00:15:20.435 }, 00:15:20.435 { 00:15:20.435 "name": "BaseBdev4", 00:15:20.435 "uuid": "ac9bf46b-2184-5622-9c8a-a548d2e35fa0", 00:15:20.435 "is_configured": true, 00:15:20.435 "data_offset": 2048, 00:15:20.435 "data_size": 63488 00:15:20.435 } 00:15:20.435 ] 00:15:20.435 }' 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.435 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.003 [2024-11-20 11:27:28.649730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.003 [2024-11-20 11:27:28.649927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.003 [2024-11-20 11:27:28.653320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.003 [2024-11-20 11:27:28.653523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.003 [2024-11-20 11:27:28.653600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.003 [2024-11-20 11:27:28.653649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:21.003 { 00:15:21.003 "results": [ 00:15:21.003 { 00:15:21.003 "job": "raid_bdev1", 00:15:21.003 "core_mask": "0x1", 00:15:21.003 "workload": "randrw", 00:15:21.003 "percentage": 50, 00:15:21.003 "status": "finished", 00:15:21.003 "queue_depth": 1, 00:15:21.003 "io_size": 131072, 00:15:21.003 "runtime": 1.403678, 00:15:21.003 "iops": 10538.741791208526, 00:15:21.003 "mibps": 1317.3427239010657, 00:15:21.003 "io_failed": 1, 00:15:21.003 "io_timeout": 0, 00:15:21.003 "avg_latency_us": 132.54999594430177, 00:15:21.003 "min_latency_us": 43.28727272727273, 00:15:21.003 "max_latency_us": 1824.581818181818 00:15:21.003 } 00:15:21.003 ], 00:15:21.003 "core_count": 1 00:15:21.003 } 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73097 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73097 ']' 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73097 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73097 00:15:21.003 killing process with pid 73097 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73097' 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73097 00:15:21.003 [2024-11-20 11:27:28.683789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.003 11:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73097 00:15:21.263 [2024-11-20 11:27:28.978559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hXnBbxDclE 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:22.644 ************************************ 00:15:22.644 END TEST 
raid_write_error_test 00:15:22.644 ************************************ 00:15:22.644 00:15:22.644 real 0m4.865s 00:15:22.644 user 0m6.028s 00:15:22.644 sys 0m0.558s 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.644 11:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.644 11:27:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:22.644 11:27:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:22.644 11:27:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.644 11:27:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.644 11:27:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.644 ************************************ 00:15:22.644 START TEST raid_state_function_test 00:15:22.644 ************************************ 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.644 11:27:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:22.644 11:27:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.644 Process raid pid: 73235 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73235 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73235' 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73235 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73235 ']' 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.644 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.644 [2024-11-20 11:27:30.229676] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:22.644 [2024-11-20 11:27:30.230006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.644 [2024-11-20 11:27:30.405505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.904 [2024-11-20 11:27:30.538696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.904 [2024-11-20 11:27:30.747873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.163 [2024-11-20 11:27:30.748131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.451 [2024-11-20 11:27:31.258530] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.451 [2024-11-20 11:27:31.258597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.451 [2024-11-20 11:27:31.258627] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.451 [2024-11-20 11:27:31.258646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.451 [2024-11-20 11:27:31.258658] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:23.451 [2024-11-20 11:27:31.258672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.451 [2024-11-20 11:27:31.258682] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.451 [2024-11-20 11:27:31.258696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.451 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.710 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.710 "name": "Existed_Raid", 00:15:23.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.710 "strip_size_kb": 0, 00:15:23.710 "state": "configuring", 00:15:23.710 "raid_level": "raid1", 00:15:23.710 "superblock": false, 00:15:23.710 "num_base_bdevs": 4, 00:15:23.710 "num_base_bdevs_discovered": 0, 00:15:23.710 "num_base_bdevs_operational": 4, 00:15:23.710 "base_bdevs_list": [ 00:15:23.710 { 00:15:23.710 "name": "BaseBdev1", 00:15:23.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.710 "is_configured": false, 00:15:23.710 "data_offset": 0, 00:15:23.710 "data_size": 0 00:15:23.710 }, 00:15:23.710 { 00:15:23.710 "name": "BaseBdev2", 00:15:23.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.710 "is_configured": false, 00:15:23.710 "data_offset": 0, 00:15:23.710 "data_size": 0 00:15:23.710 }, 00:15:23.710 { 00:15:23.710 "name": "BaseBdev3", 00:15:23.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.710 "is_configured": false, 00:15:23.710 "data_offset": 0, 00:15:23.710 "data_size": 0 00:15:23.710 }, 00:15:23.710 { 00:15:23.710 "name": "BaseBdev4", 00:15:23.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.710 "is_configured": false, 00:15:23.710 "data_offset": 0, 00:15:23.710 "data_size": 0 00:15:23.710 } 00:15:23.710 ] 00:15:23.710 }' 00:15:23.710 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.710 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.969 [2024-11-20 11:27:31.770644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.969 [2024-11-20 11:27:31.770691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.969 [2024-11-20 11:27:31.778587] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.969 [2024-11-20 11:27:31.778653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.969 [2024-11-20 11:27:31.778669] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.969 [2024-11-20 11:27:31.778686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.969 [2024-11-20 11:27:31.778696] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.969 [2024-11-20 11:27:31.778710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.969 [2024-11-20 11:27:31.778720] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.969 [2024-11-20 11:27:31.778734] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.969 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.228 [2024-11-20 11:27:31.823377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.228 BaseBdev1 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.228 [ 00:15:24.228 { 00:15:24.228 "name": "BaseBdev1", 00:15:24.228 "aliases": [ 00:15:24.228 "8633526a-3e59-4c29-935c-21cc3d5d9eef" 00:15:24.228 ], 00:15:24.228 "product_name": "Malloc disk", 00:15:24.228 "block_size": 512, 00:15:24.228 "num_blocks": 65536, 00:15:24.228 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:24.228 "assigned_rate_limits": { 00:15:24.228 "rw_ios_per_sec": 0, 00:15:24.228 "rw_mbytes_per_sec": 0, 00:15:24.228 "r_mbytes_per_sec": 0, 00:15:24.228 "w_mbytes_per_sec": 0 00:15:24.228 }, 00:15:24.228 "claimed": true, 00:15:24.228 "claim_type": "exclusive_write", 00:15:24.228 "zoned": false, 00:15:24.228 "supported_io_types": { 00:15:24.228 "read": true, 00:15:24.228 "write": true, 00:15:24.228 "unmap": true, 00:15:24.228 "flush": true, 00:15:24.228 "reset": true, 00:15:24.228 "nvme_admin": false, 00:15:24.228 "nvme_io": false, 00:15:24.228 "nvme_io_md": false, 00:15:24.228 "write_zeroes": true, 00:15:24.228 "zcopy": true, 00:15:24.228 "get_zone_info": false, 00:15:24.228 "zone_management": false, 00:15:24.228 "zone_append": false, 00:15:24.228 "compare": false, 00:15:24.228 "compare_and_write": false, 00:15:24.228 "abort": true, 00:15:24.228 "seek_hole": false, 00:15:24.228 "seek_data": false, 00:15:24.228 "copy": true, 00:15:24.228 "nvme_iov_md": false 00:15:24.228 }, 00:15:24.228 "memory_domains": [ 00:15:24.228 { 00:15:24.228 "dma_device_id": "system", 00:15:24.228 "dma_device_type": 1 00:15:24.228 }, 00:15:24.228 { 00:15:24.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.228 "dma_device_type": 2 00:15:24.228 } 00:15:24.228 ], 00:15:24.228 "driver_specific": {} 00:15:24.228 } 00:15:24.228 ] 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.228 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.229 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.229 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.229 "name": "Existed_Raid", 
00:15:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.229 "strip_size_kb": 0, 00:15:24.229 "state": "configuring", 00:15:24.229 "raid_level": "raid1", 00:15:24.229 "superblock": false, 00:15:24.229 "num_base_bdevs": 4, 00:15:24.229 "num_base_bdevs_discovered": 1, 00:15:24.229 "num_base_bdevs_operational": 4, 00:15:24.229 "base_bdevs_list": [ 00:15:24.229 { 00:15:24.229 "name": "BaseBdev1", 00:15:24.229 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:24.229 "is_configured": true, 00:15:24.229 "data_offset": 0, 00:15:24.229 "data_size": 65536 00:15:24.229 }, 00:15:24.229 { 00:15:24.229 "name": "BaseBdev2", 00:15:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.229 "is_configured": false, 00:15:24.229 "data_offset": 0, 00:15:24.229 "data_size": 0 00:15:24.229 }, 00:15:24.229 { 00:15:24.229 "name": "BaseBdev3", 00:15:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.229 "is_configured": false, 00:15:24.229 "data_offset": 0, 00:15:24.229 "data_size": 0 00:15:24.229 }, 00:15:24.229 { 00:15:24.229 "name": "BaseBdev4", 00:15:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.229 "is_configured": false, 00:15:24.229 "data_offset": 0, 00:15:24.229 "data_size": 0 00:15:24.229 } 00:15:24.229 ] 00:15:24.229 }' 00:15:24.229 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.229 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.795 [2024-11-20 11:27:32.359582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.795 [2024-11-20 11:27:32.359662] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.795 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.795 [2024-11-20 11:27:32.371679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.795 [2024-11-20 11:27:32.374288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.795 [2024-11-20 11:27:32.374477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.796 [2024-11-20 11:27:32.374598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.796 [2024-11-20 11:27:32.374697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.796 [2024-11-20 11:27:32.374809] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.796 [2024-11-20 11:27:32.374869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.796 
11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.796 "name": "Existed_Raid", 00:15:24.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.796 "strip_size_kb": 0, 00:15:24.796 "state": "configuring", 00:15:24.796 "raid_level": "raid1", 00:15:24.796 "superblock": false, 00:15:24.796 "num_base_bdevs": 4, 00:15:24.796 "num_base_bdevs_discovered": 1, 
00:15:24.796 "num_base_bdevs_operational": 4, 00:15:24.796 "base_bdevs_list": [ 00:15:24.796 { 00:15:24.796 "name": "BaseBdev1", 00:15:24.796 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:24.796 "is_configured": true, 00:15:24.796 "data_offset": 0, 00:15:24.796 "data_size": 65536 00:15:24.796 }, 00:15:24.796 { 00:15:24.796 "name": "BaseBdev2", 00:15:24.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.796 "is_configured": false, 00:15:24.796 "data_offset": 0, 00:15:24.796 "data_size": 0 00:15:24.796 }, 00:15:24.796 { 00:15:24.796 "name": "BaseBdev3", 00:15:24.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.796 "is_configured": false, 00:15:24.796 "data_offset": 0, 00:15:24.796 "data_size": 0 00:15:24.796 }, 00:15:24.796 { 00:15:24.796 "name": "BaseBdev4", 00:15:24.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.796 "is_configured": false, 00:15:24.796 "data_offset": 0, 00:15:24.796 "data_size": 0 00:15:24.796 } 00:15:24.796 ] 00:15:24.796 }' 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.796 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 [2024-11-20 11:27:32.958201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.361 BaseBdev2 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 [ 00:15:25.361 { 00:15:25.361 "name": "BaseBdev2", 00:15:25.361 "aliases": [ 00:15:25.361 "e5f488ac-7114-4074-84e8-8184789b3179" 00:15:25.361 ], 00:15:25.361 "product_name": "Malloc disk", 00:15:25.361 "block_size": 512, 00:15:25.361 "num_blocks": 65536, 00:15:25.361 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:25.361 "assigned_rate_limits": { 00:15:25.361 "rw_ios_per_sec": 0, 00:15:25.361 "rw_mbytes_per_sec": 0, 00:15:25.361 "r_mbytes_per_sec": 0, 00:15:25.361 "w_mbytes_per_sec": 0 00:15:25.361 }, 00:15:25.361 "claimed": true, 00:15:25.361 "claim_type": "exclusive_write", 00:15:25.361 "zoned": false, 00:15:25.361 "supported_io_types": { 00:15:25.361 "read": true, 
00:15:25.361 "write": true, 00:15:25.361 "unmap": true, 00:15:25.361 "flush": true, 00:15:25.361 "reset": true, 00:15:25.361 "nvme_admin": false, 00:15:25.361 "nvme_io": false, 00:15:25.361 "nvme_io_md": false, 00:15:25.361 "write_zeroes": true, 00:15:25.361 "zcopy": true, 00:15:25.361 "get_zone_info": false, 00:15:25.361 "zone_management": false, 00:15:25.361 "zone_append": false, 00:15:25.361 "compare": false, 00:15:25.361 "compare_and_write": false, 00:15:25.361 "abort": true, 00:15:25.361 "seek_hole": false, 00:15:25.361 "seek_data": false, 00:15:25.361 "copy": true, 00:15:25.361 "nvme_iov_md": false 00:15:25.361 }, 00:15:25.361 "memory_domains": [ 00:15:25.361 { 00:15:25.361 "dma_device_id": "system", 00:15:25.361 "dma_device_type": 1 00:15:25.361 }, 00:15:25.361 { 00:15:25.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.361 "dma_device_type": 2 00:15:25.361 } 00:15:25.361 ], 00:15:25.361 "driver_specific": {} 00:15:25.361 } 00:15:25.361 ] 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.361 11:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.361 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.361 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.361 "name": "Existed_Raid", 00:15:25.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.361 "strip_size_kb": 0, 00:15:25.361 "state": "configuring", 00:15:25.361 "raid_level": "raid1", 00:15:25.361 "superblock": false, 00:15:25.361 "num_base_bdevs": 4, 00:15:25.361 "num_base_bdevs_discovered": 2, 00:15:25.361 "num_base_bdevs_operational": 4, 00:15:25.361 "base_bdevs_list": [ 00:15:25.361 { 00:15:25.361 "name": "BaseBdev1", 00:15:25.361 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:25.361 "is_configured": true, 00:15:25.361 "data_offset": 0, 00:15:25.361 "data_size": 65536 00:15:25.361 }, 00:15:25.361 { 00:15:25.361 "name": "BaseBdev2", 00:15:25.361 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:25.361 "is_configured": true, 
00:15:25.361 "data_offset": 0, 00:15:25.361 "data_size": 65536 00:15:25.361 }, 00:15:25.361 { 00:15:25.361 "name": "BaseBdev3", 00:15:25.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.361 "is_configured": false, 00:15:25.361 "data_offset": 0, 00:15:25.361 "data_size": 0 00:15:25.361 }, 00:15:25.361 { 00:15:25.361 "name": "BaseBdev4", 00:15:25.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.361 "is_configured": false, 00:15:25.361 "data_offset": 0, 00:15:25.361 "data_size": 0 00:15:25.361 } 00:15:25.361 ] 00:15:25.361 }' 00:15:25.361 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.361 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 [2024-11-20 11:27:33.576203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.925 BaseBdev3 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.925 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 [ 00:15:25.925 { 00:15:25.925 "name": "BaseBdev3", 00:15:25.925 "aliases": [ 00:15:25.925 "8cedc62e-32e3-4574-a618-60e96f90a0c5" 00:15:25.925 ], 00:15:25.925 "product_name": "Malloc disk", 00:15:25.926 "block_size": 512, 00:15:25.926 "num_blocks": 65536, 00:15:25.926 "uuid": "8cedc62e-32e3-4574-a618-60e96f90a0c5", 00:15:25.926 "assigned_rate_limits": { 00:15:25.926 "rw_ios_per_sec": 0, 00:15:25.926 "rw_mbytes_per_sec": 0, 00:15:25.926 "r_mbytes_per_sec": 0, 00:15:25.926 "w_mbytes_per_sec": 0 00:15:25.926 }, 00:15:25.926 "claimed": true, 00:15:25.926 "claim_type": "exclusive_write", 00:15:25.926 "zoned": false, 00:15:25.926 "supported_io_types": { 00:15:25.926 "read": true, 00:15:25.926 "write": true, 00:15:25.926 "unmap": true, 00:15:25.926 "flush": true, 00:15:25.926 "reset": true, 00:15:25.926 "nvme_admin": false, 00:15:25.926 "nvme_io": false, 00:15:25.926 "nvme_io_md": false, 00:15:25.926 "write_zeroes": true, 00:15:25.926 "zcopy": true, 00:15:25.926 "get_zone_info": false, 00:15:25.926 "zone_management": false, 00:15:25.926 "zone_append": false, 00:15:25.926 "compare": false, 00:15:25.926 "compare_and_write": false, 
00:15:25.926 "abort": true, 00:15:25.926 "seek_hole": false, 00:15:25.926 "seek_data": false, 00:15:25.926 "copy": true, 00:15:25.926 "nvme_iov_md": false 00:15:25.926 }, 00:15:25.926 "memory_domains": [ 00:15:25.926 { 00:15:25.926 "dma_device_id": "system", 00:15:25.926 "dma_device_type": 1 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.926 "dma_device_type": 2 00:15:25.926 } 00:15:25.926 ], 00:15:25.926 "driver_specific": {} 00:15:25.926 } 00:15:25.926 ] 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.926 "name": "Existed_Raid", 00:15:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.926 "strip_size_kb": 0, 00:15:25.926 "state": "configuring", 00:15:25.926 "raid_level": "raid1", 00:15:25.926 "superblock": false, 00:15:25.926 "num_base_bdevs": 4, 00:15:25.926 "num_base_bdevs_discovered": 3, 00:15:25.926 "num_base_bdevs_operational": 4, 00:15:25.926 "base_bdevs_list": [ 00:15:25.926 { 00:15:25.926 "name": "BaseBdev1", 00:15:25.926 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:25.926 "is_configured": true, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 65536 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "name": "BaseBdev2", 00:15:25.926 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:25.926 "is_configured": true, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 65536 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "name": "BaseBdev3", 00:15:25.926 "uuid": "8cedc62e-32e3-4574-a618-60e96f90a0c5", 00:15:25.926 "is_configured": true, 00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 65536 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "name": "BaseBdev4", 00:15:25.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.926 "is_configured": false, 
00:15:25.926 "data_offset": 0, 00:15:25.926 "data_size": 0 00:15:25.926 } 00:15:25.926 ] 00:15:25.926 }' 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.926 11:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 [2024-11-20 11:27:34.150787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.492 [2024-11-20 11:27:34.151062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:26.492 [2024-11-20 11:27:34.151087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:26.492 [2024-11-20 11:27:34.151454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:26.492 [2024-11-20 11:27:34.151719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:26.492 [2024-11-20 11:27:34.151748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:26.492 [2024-11-20 11:27:34.152062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.492 BaseBdev4 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 [ 00:15:26.492 { 00:15:26.492 "name": "BaseBdev4", 00:15:26.492 "aliases": [ 00:15:26.492 "58e52538-6f04-4ab6-ba3e-133d2ce5f79b" 00:15:26.492 ], 00:15:26.492 "product_name": "Malloc disk", 00:15:26.492 "block_size": 512, 00:15:26.492 "num_blocks": 65536, 00:15:26.492 "uuid": "58e52538-6f04-4ab6-ba3e-133d2ce5f79b", 00:15:26.492 "assigned_rate_limits": { 00:15:26.492 "rw_ios_per_sec": 0, 00:15:26.492 "rw_mbytes_per_sec": 0, 00:15:26.492 "r_mbytes_per_sec": 0, 00:15:26.492 "w_mbytes_per_sec": 0 00:15:26.492 }, 00:15:26.492 "claimed": true, 00:15:26.492 "claim_type": "exclusive_write", 00:15:26.492 "zoned": false, 00:15:26.492 "supported_io_types": { 00:15:26.492 "read": true, 00:15:26.492 "write": true, 00:15:26.492 "unmap": true, 00:15:26.492 "flush": true, 00:15:26.492 "reset": true, 00:15:26.492 
"nvme_admin": false, 00:15:26.492 "nvme_io": false, 00:15:26.492 "nvme_io_md": false, 00:15:26.492 "write_zeroes": true, 00:15:26.492 "zcopy": true, 00:15:26.492 "get_zone_info": false, 00:15:26.492 "zone_management": false, 00:15:26.492 "zone_append": false, 00:15:26.492 "compare": false, 00:15:26.492 "compare_and_write": false, 00:15:26.492 "abort": true, 00:15:26.492 "seek_hole": false, 00:15:26.492 "seek_data": false, 00:15:26.492 "copy": true, 00:15:26.492 "nvme_iov_md": false 00:15:26.492 }, 00:15:26.492 "memory_domains": [ 00:15:26.492 { 00:15:26.492 "dma_device_id": "system", 00:15:26.492 "dma_device_type": 1 00:15:26.492 }, 00:15:26.492 { 00:15:26.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.492 "dma_device_type": 2 00:15:26.492 } 00:15:26.492 ], 00:15:26.492 "driver_specific": {} 00:15:26.492 } 00:15:26.492 ] 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.492 11:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.492 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.493 "name": "Existed_Raid", 00:15:26.493 "uuid": "df2c310a-51ee-489f-a428-3680b0d6f31e", 00:15:26.493 "strip_size_kb": 0, 00:15:26.493 "state": "online", 00:15:26.493 "raid_level": "raid1", 00:15:26.493 "superblock": false, 00:15:26.493 "num_base_bdevs": 4, 00:15:26.493 "num_base_bdevs_discovered": 4, 00:15:26.493 "num_base_bdevs_operational": 4, 00:15:26.493 "base_bdevs_list": [ 00:15:26.493 { 00:15:26.493 "name": "BaseBdev1", 00:15:26.493 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:26.493 "is_configured": true, 00:15:26.493 "data_offset": 0, 00:15:26.493 "data_size": 65536 00:15:26.493 }, 00:15:26.493 { 00:15:26.493 "name": "BaseBdev2", 00:15:26.493 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:26.493 "is_configured": true, 00:15:26.493 "data_offset": 0, 00:15:26.493 "data_size": 65536 00:15:26.493 }, 00:15:26.493 { 00:15:26.493 "name": "BaseBdev3", 00:15:26.493 "uuid": 
"8cedc62e-32e3-4574-a618-60e96f90a0c5", 00:15:26.493 "is_configured": true, 00:15:26.493 "data_offset": 0, 00:15:26.493 "data_size": 65536 00:15:26.493 }, 00:15:26.493 { 00:15:26.493 "name": "BaseBdev4", 00:15:26.493 "uuid": "58e52538-6f04-4ab6-ba3e-133d2ce5f79b", 00:15:26.493 "is_configured": true, 00:15:26.493 "data_offset": 0, 00:15:26.493 "data_size": 65536 00:15:26.493 } 00:15:26.493 ] 00:15:26.493 }' 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.493 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.059 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.059 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.060 [2024-11-20 11:27:34.731432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.060 11:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.060 "name": "Existed_Raid", 00:15:27.060 "aliases": [ 00:15:27.060 "df2c310a-51ee-489f-a428-3680b0d6f31e" 00:15:27.060 ], 00:15:27.060 "product_name": "Raid Volume", 00:15:27.060 "block_size": 512, 00:15:27.060 "num_blocks": 65536, 00:15:27.060 "uuid": "df2c310a-51ee-489f-a428-3680b0d6f31e", 00:15:27.060 "assigned_rate_limits": { 00:15:27.060 "rw_ios_per_sec": 0, 00:15:27.060 "rw_mbytes_per_sec": 0, 00:15:27.060 "r_mbytes_per_sec": 0, 00:15:27.060 "w_mbytes_per_sec": 0 00:15:27.060 }, 00:15:27.060 "claimed": false, 00:15:27.060 "zoned": false, 00:15:27.060 "supported_io_types": { 00:15:27.060 "read": true, 00:15:27.060 "write": true, 00:15:27.060 "unmap": false, 00:15:27.060 "flush": false, 00:15:27.060 "reset": true, 00:15:27.060 "nvme_admin": false, 00:15:27.060 "nvme_io": false, 00:15:27.060 "nvme_io_md": false, 00:15:27.060 "write_zeroes": true, 00:15:27.060 "zcopy": false, 00:15:27.060 "get_zone_info": false, 00:15:27.060 "zone_management": false, 00:15:27.060 "zone_append": false, 00:15:27.060 "compare": false, 00:15:27.060 "compare_and_write": false, 00:15:27.060 "abort": false, 00:15:27.060 "seek_hole": false, 00:15:27.060 "seek_data": false, 00:15:27.060 "copy": false, 00:15:27.060 "nvme_iov_md": false 00:15:27.060 }, 00:15:27.060 "memory_domains": [ 00:15:27.060 { 00:15:27.060 "dma_device_id": "system", 00:15:27.060 "dma_device_type": 1 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.060 "dma_device_type": 2 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "system", 00:15:27.060 "dma_device_type": 1 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.060 "dma_device_type": 2 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "system", 00:15:27.060 "dma_device_type": 1 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:27.060 "dma_device_type": 2 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "system", 00:15:27.060 "dma_device_type": 1 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.060 "dma_device_type": 2 00:15:27.060 } 00:15:27.060 ], 00:15:27.060 "driver_specific": { 00:15:27.060 "raid": { 00:15:27.060 "uuid": "df2c310a-51ee-489f-a428-3680b0d6f31e", 00:15:27.060 "strip_size_kb": 0, 00:15:27.060 "state": "online", 00:15:27.060 "raid_level": "raid1", 00:15:27.060 "superblock": false, 00:15:27.060 "num_base_bdevs": 4, 00:15:27.060 "num_base_bdevs_discovered": 4, 00:15:27.060 "num_base_bdevs_operational": 4, 00:15:27.060 "base_bdevs_list": [ 00:15:27.060 { 00:15:27.060 "name": "BaseBdev1", 00:15:27.060 "uuid": "8633526a-3e59-4c29-935c-21cc3d5d9eef", 00:15:27.060 "is_configured": true, 00:15:27.060 "data_offset": 0, 00:15:27.060 "data_size": 65536 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "name": "BaseBdev2", 00:15:27.060 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:27.060 "is_configured": true, 00:15:27.060 "data_offset": 0, 00:15:27.060 "data_size": 65536 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "name": "BaseBdev3", 00:15:27.060 "uuid": "8cedc62e-32e3-4574-a618-60e96f90a0c5", 00:15:27.060 "is_configured": true, 00:15:27.060 "data_offset": 0, 00:15:27.060 "data_size": 65536 00:15:27.060 }, 00:15:27.060 { 00:15:27.060 "name": "BaseBdev4", 00:15:27.060 "uuid": "58e52538-6f04-4ab6-ba3e-133d2ce5f79b", 00:15:27.060 "is_configured": true, 00:15:27.060 "data_offset": 0, 00:15:27.060 "data_size": 65536 00:15:27.060 } 00:15:27.060 ] 00:15:27.060 } 00:15:27.060 } 00:15:27.060 }' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:27.060 BaseBdev2 00:15:27.060 BaseBdev3 
00:15:27.060 BaseBdev4' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.060 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.319 11:27:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.319 11:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.320 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.320 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.320 11:27:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.320 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.320 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.320 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.320 [2024-11-20 11:27:35.119216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.579 
11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.579 "name": "Existed_Raid", 00:15:27.579 "uuid": "df2c310a-51ee-489f-a428-3680b0d6f31e", 00:15:27.579 "strip_size_kb": 0, 00:15:27.579 "state": "online", 00:15:27.579 "raid_level": "raid1", 00:15:27.579 "superblock": false, 00:15:27.579 "num_base_bdevs": 4, 00:15:27.579 "num_base_bdevs_discovered": 3, 00:15:27.579 "num_base_bdevs_operational": 3, 00:15:27.579 "base_bdevs_list": [ 00:15:27.579 { 00:15:27.579 "name": null, 00:15:27.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.579 "is_configured": false, 00:15:27.579 "data_offset": 0, 00:15:27.579 "data_size": 65536 00:15:27.579 }, 00:15:27.579 { 00:15:27.579 "name": "BaseBdev2", 00:15:27.579 "uuid": "e5f488ac-7114-4074-84e8-8184789b3179", 00:15:27.579 "is_configured": true, 00:15:27.579 "data_offset": 0, 00:15:27.579 "data_size": 65536 00:15:27.579 }, 00:15:27.579 { 00:15:27.579 "name": "BaseBdev3", 00:15:27.579 "uuid": "8cedc62e-32e3-4574-a618-60e96f90a0c5", 00:15:27.579 "is_configured": true, 00:15:27.579 "data_offset": 0, 
00:15:27.579 "data_size": 65536 00:15:27.579 }, 00:15:27.579 { 00:15:27.579 "name": "BaseBdev4", 00:15:27.579 "uuid": "58e52538-6f04-4ab6-ba3e-133d2ce5f79b", 00:15:27.579 "is_configured": true, 00:15:27.579 "data_offset": 0, 00:15:27.579 "data_size": 65536 00:15:27.579 } 00:15:27.579 ] 00:15:27.579 }' 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.579 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 [2024-11-20 11:27:35.784941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.145 11:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.145 [2024-11-20 11:27:35.925221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.403 [2024-11-20 11:27:36.076495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:28.403 [2024-11-20 11:27:36.076777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.403 [2024-11-20 11:27:36.164083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.403 [2024-11-20 11:27:36.164158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.403 [2024-11-20 11:27:36.164180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.403 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 BaseBdev2 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 [ 00:15:28.662 { 00:15:28.662 "name": "BaseBdev2", 00:15:28.662 "aliases": [ 00:15:28.662 "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e" 00:15:28.662 ], 00:15:28.662 "product_name": "Malloc disk", 00:15:28.662 "block_size": 512, 00:15:28.662 "num_blocks": 65536, 00:15:28.662 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:28.662 "assigned_rate_limits": { 00:15:28.662 "rw_ios_per_sec": 0, 00:15:28.662 "rw_mbytes_per_sec": 0, 00:15:28.662 "r_mbytes_per_sec": 0, 00:15:28.662 "w_mbytes_per_sec": 0 00:15:28.662 }, 00:15:28.662 "claimed": false, 00:15:28.662 "zoned": false, 00:15:28.662 "supported_io_types": { 00:15:28.662 "read": true, 00:15:28.662 "write": true, 00:15:28.662 "unmap": true, 00:15:28.662 "flush": true, 00:15:28.662 "reset": true, 00:15:28.662 "nvme_admin": false, 00:15:28.662 "nvme_io": false, 00:15:28.662 "nvme_io_md": false, 00:15:28.662 "write_zeroes": true, 00:15:28.662 "zcopy": true, 00:15:28.662 "get_zone_info": false, 00:15:28.662 "zone_management": false, 00:15:28.662 "zone_append": false, 
00:15:28.662 "compare": false, 00:15:28.662 "compare_and_write": false, 00:15:28.662 "abort": true, 00:15:28.662 "seek_hole": false, 00:15:28.662 "seek_data": false, 00:15:28.662 "copy": true, 00:15:28.662 "nvme_iov_md": false 00:15:28.662 }, 00:15:28.662 "memory_domains": [ 00:15:28.662 { 00:15:28.662 "dma_device_id": "system", 00:15:28.662 "dma_device_type": 1 00:15:28.662 }, 00:15:28.662 { 00:15:28.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.662 "dma_device_type": 2 00:15:28.662 } 00:15:28.662 ], 00:15:28.662 "driver_specific": {} 00:15:28.662 } 00:15:28.662 ] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 BaseBdev3 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 [ 00:15:28.662 { 00:15:28.662 "name": "BaseBdev3", 00:15:28.662 "aliases": [ 00:15:28.662 "e3f2916b-e884-4721-92d4-21146c6f061e" 00:15:28.662 ], 00:15:28.662 "product_name": "Malloc disk", 00:15:28.662 "block_size": 512, 00:15:28.662 "num_blocks": 65536, 00:15:28.662 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:28.662 "assigned_rate_limits": { 00:15:28.662 "rw_ios_per_sec": 0, 00:15:28.662 "rw_mbytes_per_sec": 0, 00:15:28.662 "r_mbytes_per_sec": 0, 00:15:28.662 "w_mbytes_per_sec": 0 00:15:28.662 }, 00:15:28.662 "claimed": false, 00:15:28.662 "zoned": false, 00:15:28.662 "supported_io_types": { 00:15:28.662 "read": true, 00:15:28.662 "write": true, 00:15:28.662 "unmap": true, 00:15:28.662 "flush": true, 00:15:28.662 "reset": true, 00:15:28.662 "nvme_admin": false, 00:15:28.662 "nvme_io": false, 00:15:28.662 "nvme_io_md": false, 00:15:28.662 "write_zeroes": true, 00:15:28.662 "zcopy": true, 00:15:28.662 "get_zone_info": false, 00:15:28.662 "zone_management": false, 00:15:28.662 "zone_append": false, 
00:15:28.662 "compare": false, 00:15:28.662 "compare_and_write": false, 00:15:28.662 "abort": true, 00:15:28.662 "seek_hole": false, 00:15:28.662 "seek_data": false, 00:15:28.662 "copy": true, 00:15:28.662 "nvme_iov_md": false 00:15:28.662 }, 00:15:28.662 "memory_domains": [ 00:15:28.662 { 00:15:28.662 "dma_device_id": "system", 00:15:28.662 "dma_device_type": 1 00:15:28.662 }, 00:15:28.662 { 00:15:28.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.662 "dma_device_type": 2 00:15:28.662 } 00:15:28.662 ], 00:15:28.662 "driver_specific": {} 00:15:28.662 } 00:15:28.662 ] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 BaseBdev4 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.662 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.662 [ 00:15:28.662 { 00:15:28.662 "name": "BaseBdev4", 00:15:28.662 "aliases": [ 00:15:28.662 "360b74b8-3b38-4b9b-8a2f-9e41d6b39454" 00:15:28.662 ], 00:15:28.662 "product_name": "Malloc disk", 00:15:28.663 "block_size": 512, 00:15:28.663 "num_blocks": 65536, 00:15:28.663 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:28.663 "assigned_rate_limits": { 00:15:28.663 "rw_ios_per_sec": 0, 00:15:28.663 "rw_mbytes_per_sec": 0, 00:15:28.663 "r_mbytes_per_sec": 0, 00:15:28.663 "w_mbytes_per_sec": 0 00:15:28.663 }, 00:15:28.663 "claimed": false, 00:15:28.663 "zoned": false, 00:15:28.663 "supported_io_types": { 00:15:28.663 "read": true, 00:15:28.663 "write": true, 00:15:28.663 "unmap": true, 00:15:28.663 "flush": true, 00:15:28.663 "reset": true, 00:15:28.663 "nvme_admin": false, 00:15:28.663 "nvme_io": false, 00:15:28.663 "nvme_io_md": false, 00:15:28.663 "write_zeroes": true, 00:15:28.663 "zcopy": true, 00:15:28.663 "get_zone_info": false, 00:15:28.663 "zone_management": false, 00:15:28.663 "zone_append": false, 
00:15:28.663 "compare": false, 00:15:28.663 "compare_and_write": false, 00:15:28.663 "abort": true, 00:15:28.663 "seek_hole": false, 00:15:28.663 "seek_data": false, 00:15:28.663 "copy": true, 00:15:28.663 "nvme_iov_md": false 00:15:28.663 }, 00:15:28.663 "memory_domains": [ 00:15:28.663 { 00:15:28.663 "dma_device_id": "system", 00:15:28.663 "dma_device_type": 1 00:15:28.663 }, 00:15:28.663 { 00:15:28.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.663 "dma_device_type": 2 00:15:28.663 } 00:15:28.663 ], 00:15:28.663 "driver_specific": {} 00:15:28.663 } 00:15:28.663 ] 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.663 [2024-11-20 11:27:36.433134] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.663 [2024-11-20 11:27:36.433214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.663 [2024-11-20 11:27:36.433260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.663 [2024-11-20 11:27:36.436054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.663 [2024-11-20 11:27:36.436127] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:28.663 "name": "Existed_Raid", 00:15:28.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.663 "strip_size_kb": 0, 00:15:28.663 "state": "configuring", 00:15:28.663 "raid_level": "raid1", 00:15:28.663 "superblock": false, 00:15:28.663 "num_base_bdevs": 4, 00:15:28.663 "num_base_bdevs_discovered": 3, 00:15:28.663 "num_base_bdevs_operational": 4, 00:15:28.663 "base_bdevs_list": [ 00:15:28.663 { 00:15:28.663 "name": "BaseBdev1", 00:15:28.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.663 "is_configured": false, 00:15:28.663 "data_offset": 0, 00:15:28.663 "data_size": 0 00:15:28.663 }, 00:15:28.663 { 00:15:28.663 "name": "BaseBdev2", 00:15:28.663 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:28.663 "is_configured": true, 00:15:28.663 "data_offset": 0, 00:15:28.663 "data_size": 65536 00:15:28.663 }, 00:15:28.663 { 00:15:28.663 "name": "BaseBdev3", 00:15:28.663 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:28.663 "is_configured": true, 00:15:28.663 "data_offset": 0, 00:15:28.663 "data_size": 65536 00:15:28.663 }, 00:15:28.663 { 00:15:28.663 "name": "BaseBdev4", 00:15:28.663 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:28.663 "is_configured": true, 00:15:28.663 "data_offset": 0, 00:15:28.663 "data_size": 65536 00:15:28.663 } 00:15:28.663 ] 00:15:28.663 }' 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.663 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.230 [2024-11-20 11:27:36.953362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.230 11:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.230 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.230 "name": "Existed_Raid", 00:15:29.230 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:29.230 "strip_size_kb": 0, 00:15:29.230 "state": "configuring", 00:15:29.230 "raid_level": "raid1", 00:15:29.230 "superblock": false, 00:15:29.230 "num_base_bdevs": 4, 00:15:29.230 "num_base_bdevs_discovered": 2, 00:15:29.230 "num_base_bdevs_operational": 4, 00:15:29.230 "base_bdevs_list": [ 00:15:29.230 { 00:15:29.230 "name": "BaseBdev1", 00:15:29.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.230 "is_configured": false, 00:15:29.230 "data_offset": 0, 00:15:29.230 "data_size": 0 00:15:29.230 }, 00:15:29.230 { 00:15:29.230 "name": null, 00:15:29.230 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:29.230 "is_configured": false, 00:15:29.230 "data_offset": 0, 00:15:29.230 "data_size": 65536 00:15:29.230 }, 00:15:29.230 { 00:15:29.230 "name": "BaseBdev3", 00:15:29.230 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:29.230 "is_configured": true, 00:15:29.230 "data_offset": 0, 00:15:29.230 "data_size": 65536 00:15:29.230 }, 00:15:29.230 { 00:15:29.231 "name": "BaseBdev4", 00:15:29.231 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:29.231 "is_configured": true, 00:15:29.231 "data_offset": 0, 00:15:29.231 "data_size": 65536 00:15:29.231 } 00:15:29.231 ] 00:15:29.231 }' 00:15:29.231 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.231 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 [2024-11-20 11:27:37.595769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.797 BaseBdev1 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 [ 00:15:29.797 { 00:15:29.797 "name": "BaseBdev1", 00:15:29.797 "aliases": [ 00:15:29.797 "d367a489-0867-4a60-9a1f-9fd91f37cdee" 00:15:29.797 ], 00:15:29.797 "product_name": "Malloc disk", 00:15:29.797 "block_size": 512, 00:15:29.797 "num_blocks": 65536, 00:15:29.797 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:29.797 "assigned_rate_limits": { 00:15:29.797 "rw_ios_per_sec": 0, 00:15:29.797 "rw_mbytes_per_sec": 0, 00:15:29.797 "r_mbytes_per_sec": 0, 00:15:29.797 "w_mbytes_per_sec": 0 00:15:29.797 }, 00:15:29.797 "claimed": true, 00:15:29.797 "claim_type": "exclusive_write", 00:15:29.797 "zoned": false, 00:15:29.797 "supported_io_types": { 00:15:29.797 "read": true, 00:15:29.797 "write": true, 00:15:29.797 "unmap": true, 00:15:29.797 "flush": true, 00:15:29.797 "reset": true, 00:15:29.797 "nvme_admin": false, 00:15:29.797 "nvme_io": false, 00:15:29.797 "nvme_io_md": false, 00:15:29.797 "write_zeroes": true, 00:15:29.797 "zcopy": true, 00:15:29.797 "get_zone_info": false, 00:15:29.797 "zone_management": false, 00:15:29.797 "zone_append": false, 00:15:29.797 "compare": false, 00:15:29.797 "compare_and_write": false, 00:15:29.797 "abort": true, 00:15:29.797 "seek_hole": false, 00:15:29.797 "seek_data": false, 00:15:29.797 "copy": true, 00:15:29.797 "nvme_iov_md": false 00:15:29.797 }, 00:15:29.797 "memory_domains": [ 00:15:29.797 { 00:15:29.797 "dma_device_id": "system", 00:15:29.797 "dma_device_type": 1 00:15:29.797 }, 00:15:29.797 { 00:15:29.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.797 "dma_device_type": 2 00:15:29.797 } 00:15:29.797 ], 00:15:29.797 "driver_specific": {} 00:15:29.797 } 00:15:29.797 ] 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.797 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.055 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.055 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.055 "name": "Existed_Raid", 00:15:30.055 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:30.055 "strip_size_kb": 0, 00:15:30.055 "state": "configuring", 00:15:30.055 "raid_level": "raid1", 00:15:30.055 "superblock": false, 00:15:30.055 "num_base_bdevs": 4, 00:15:30.055 "num_base_bdevs_discovered": 3, 00:15:30.055 "num_base_bdevs_operational": 4, 00:15:30.055 "base_bdevs_list": [ 00:15:30.055 { 00:15:30.055 "name": "BaseBdev1", 00:15:30.055 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:30.055 "is_configured": true, 00:15:30.055 "data_offset": 0, 00:15:30.055 "data_size": 65536 00:15:30.055 }, 00:15:30.055 { 00:15:30.055 "name": null, 00:15:30.055 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:30.055 "is_configured": false, 00:15:30.055 "data_offset": 0, 00:15:30.055 "data_size": 65536 00:15:30.055 }, 00:15:30.055 { 00:15:30.055 "name": "BaseBdev3", 00:15:30.055 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:30.055 "is_configured": true, 00:15:30.055 "data_offset": 0, 00:15:30.055 "data_size": 65536 00:15:30.055 }, 00:15:30.055 { 00:15:30.055 "name": "BaseBdev4", 00:15:30.055 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:30.055 "is_configured": true, 00:15:30.055 "data_offset": 0, 00:15:30.055 "data_size": 65536 00:15:30.055 } 00:15:30.055 ] 00:15:30.055 }' 00:15:30.055 11:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.055 11:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.313 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.313 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.313 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.313 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.571 [2024-11-20 11:27:38.200015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.571 "name": "Existed_Raid", 00:15:30.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.571 "strip_size_kb": 0, 00:15:30.571 "state": "configuring", 00:15:30.571 "raid_level": "raid1", 00:15:30.571 "superblock": false, 00:15:30.571 "num_base_bdevs": 4, 00:15:30.571 "num_base_bdevs_discovered": 2, 00:15:30.571 "num_base_bdevs_operational": 4, 00:15:30.571 "base_bdevs_list": [ 00:15:30.571 { 00:15:30.571 "name": "BaseBdev1", 00:15:30.571 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:30.571 "is_configured": true, 00:15:30.571 "data_offset": 0, 00:15:30.571 "data_size": 65536 00:15:30.571 }, 00:15:30.571 { 00:15:30.571 "name": null, 00:15:30.571 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:30.571 "is_configured": false, 00:15:30.571 "data_offset": 0, 00:15:30.571 "data_size": 65536 00:15:30.571 }, 00:15:30.571 { 00:15:30.571 "name": null, 00:15:30.571 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:30.571 "is_configured": false, 00:15:30.571 "data_offset": 0, 00:15:30.571 "data_size": 65536 00:15:30.571 }, 00:15:30.571 { 00:15:30.571 "name": "BaseBdev4", 00:15:30.571 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:30.571 "is_configured": true, 00:15:30.571 "data_offset": 0, 00:15:30.571 "data_size": 65536 00:15:30.571 } 00:15:30.571 ] 00:15:30.571 }' 00:15:30.571 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.571 11:27:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.210 [2024-11-20 11:27:38.808450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.210 11:27:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.210 "name": "Existed_Raid", 00:15:31.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.210 "strip_size_kb": 0, 00:15:31.210 "state": "configuring", 00:15:31.210 "raid_level": "raid1", 00:15:31.210 "superblock": false, 00:15:31.210 "num_base_bdevs": 4, 00:15:31.210 "num_base_bdevs_discovered": 3, 00:15:31.210 "num_base_bdevs_operational": 4, 00:15:31.210 "base_bdevs_list": [ 00:15:31.210 { 00:15:31.210 "name": "BaseBdev1", 00:15:31.210 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:31.210 "is_configured": true, 00:15:31.210 "data_offset": 0, 00:15:31.210 "data_size": 65536 00:15:31.210 }, 00:15:31.210 { 00:15:31.210 "name": null, 00:15:31.210 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:31.210 "is_configured": false, 00:15:31.210 "data_offset": 
0, 00:15:31.210 "data_size": 65536 00:15:31.210 }, 00:15:31.210 { 00:15:31.210 "name": "BaseBdev3", 00:15:31.210 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:31.210 "is_configured": true, 00:15:31.210 "data_offset": 0, 00:15:31.210 "data_size": 65536 00:15:31.210 }, 00:15:31.210 { 00:15:31.210 "name": "BaseBdev4", 00:15:31.210 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:31.210 "is_configured": true, 00:15:31.210 "data_offset": 0, 00:15:31.210 "data_size": 65536 00:15:31.210 } 00:15:31.210 ] 00:15:31.210 }' 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.210 11:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 [2024-11-20 11:27:39.388694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.778 11:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.778 "name": "Existed_Raid", 00:15:31.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.778 "strip_size_kb": 0, 00:15:31.778 "state": "configuring", 00:15:31.778 
"raid_level": "raid1", 00:15:31.778 "superblock": false, 00:15:31.778 "num_base_bdevs": 4, 00:15:31.778 "num_base_bdevs_discovered": 2, 00:15:31.778 "num_base_bdevs_operational": 4, 00:15:31.778 "base_bdevs_list": [ 00:15:31.778 { 00:15:31.778 "name": null, 00:15:31.778 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:31.778 "is_configured": false, 00:15:31.778 "data_offset": 0, 00:15:31.778 "data_size": 65536 00:15:31.778 }, 00:15:31.778 { 00:15:31.778 "name": null, 00:15:31.778 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:31.778 "is_configured": false, 00:15:31.778 "data_offset": 0, 00:15:31.778 "data_size": 65536 00:15:31.778 }, 00:15:31.778 { 00:15:31.778 "name": "BaseBdev3", 00:15:31.778 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:31.778 "is_configured": true, 00:15:31.778 "data_offset": 0, 00:15:31.778 "data_size": 65536 00:15:31.778 }, 00:15:31.778 { 00:15:31.778 "name": "BaseBdev4", 00:15:31.778 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:31.778 "is_configured": true, 00:15:31.778 "data_offset": 0, 00:15:31.778 "data_size": 65536 00:15:31.778 } 00:15:31.778 ] 00:15:31.778 }' 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.778 11:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.344 [2024-11-20 11:27:40.062530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.344 "name": "Existed_Raid", 00:15:32.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.344 "strip_size_kb": 0, 00:15:32.344 "state": "configuring", 00:15:32.344 "raid_level": "raid1", 00:15:32.344 "superblock": false, 00:15:32.344 "num_base_bdevs": 4, 00:15:32.344 "num_base_bdevs_discovered": 3, 00:15:32.344 "num_base_bdevs_operational": 4, 00:15:32.344 "base_bdevs_list": [ 00:15:32.344 { 00:15:32.344 "name": null, 00:15:32.344 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:32.344 "is_configured": false, 00:15:32.344 "data_offset": 0, 00:15:32.344 "data_size": 65536 00:15:32.344 }, 00:15:32.344 { 00:15:32.344 "name": "BaseBdev2", 00:15:32.344 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:32.344 "is_configured": true, 00:15:32.344 "data_offset": 0, 00:15:32.344 "data_size": 65536 00:15:32.344 }, 00:15:32.344 { 00:15:32.344 "name": "BaseBdev3", 00:15:32.344 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:32.344 "is_configured": true, 00:15:32.344 "data_offset": 0, 00:15:32.344 "data_size": 65536 00:15:32.344 }, 00:15:32.344 { 00:15:32.344 "name": "BaseBdev4", 00:15:32.344 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:32.344 "is_configured": true, 00:15:32.344 "data_offset": 0, 00:15:32.344 "data_size": 65536 00:15:32.344 } 00:15:32.344 ] 00:15:32.344 }' 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.344 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 11:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d367a489-0867-4a60-9a1f-9fd91f37cdee 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.909 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.909 [2024-11-20 11:27:40.728344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.910 [2024-11-20 11:27:40.728664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.910 [2024-11-20 11:27:40.728698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:32.910 
[2024-11-20 11:27:40.729037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:32.910 [2024-11-20 11:27:40.729287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.910 [2024-11-20 11:27:40.729306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.910 [2024-11-20 11:27:40.729792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.910 NewBaseBdev 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:32.910 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.910 [ 00:15:32.910 { 00:15:32.910 "name": "NewBaseBdev", 00:15:32.910 "aliases": [ 00:15:32.910 "d367a489-0867-4a60-9a1f-9fd91f37cdee" 00:15:32.910 ], 00:15:32.910 "product_name": "Malloc disk", 00:15:32.910 "block_size": 512, 00:15:32.910 "num_blocks": 65536, 00:15:32.910 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:32.910 "assigned_rate_limits": { 00:15:32.910 "rw_ios_per_sec": 0, 00:15:32.910 "rw_mbytes_per_sec": 0, 00:15:32.910 "r_mbytes_per_sec": 0, 00:15:32.910 "w_mbytes_per_sec": 0 00:15:32.910 }, 00:15:32.910 "claimed": true, 00:15:32.910 "claim_type": "exclusive_write", 00:15:32.910 "zoned": false, 00:15:32.910 "supported_io_types": { 00:15:32.910 "read": true, 00:15:32.910 "write": true, 00:15:32.910 "unmap": true, 00:15:32.910 "flush": true, 00:15:32.910 "reset": true, 00:15:32.910 "nvme_admin": false, 00:15:32.910 "nvme_io": false, 00:15:32.910 "nvme_io_md": false, 00:15:32.910 "write_zeroes": true, 00:15:33.169 "zcopy": true, 00:15:33.169 "get_zone_info": false, 00:15:33.169 "zone_management": false, 00:15:33.169 "zone_append": false, 00:15:33.169 "compare": false, 00:15:33.169 "compare_and_write": false, 00:15:33.169 "abort": true, 00:15:33.169 "seek_hole": false, 00:15:33.169 "seek_data": false, 00:15:33.169 "copy": true, 00:15:33.169 "nvme_iov_md": false 00:15:33.169 }, 00:15:33.169 "memory_domains": [ 00:15:33.169 { 00:15:33.169 "dma_device_id": "system", 00:15:33.169 "dma_device_type": 1 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.169 "dma_device_type": 2 00:15:33.169 } 00:15:33.169 ], 00:15:33.169 "driver_specific": {} 00:15:33.169 } 00:15:33.169 ] 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.169 "name": "Existed_Raid", 00:15:33.169 "uuid": "cc52190d-9c71-4671-bee3-a5df52ac6b2c", 00:15:33.169 "strip_size_kb": 0, 00:15:33.169 "state": "online", 00:15:33.169 
"raid_level": "raid1", 00:15:33.169 "superblock": false, 00:15:33.169 "num_base_bdevs": 4, 00:15:33.169 "num_base_bdevs_discovered": 4, 00:15:33.169 "num_base_bdevs_operational": 4, 00:15:33.169 "base_bdevs_list": [ 00:15:33.169 { 00:15:33.169 "name": "NewBaseBdev", 00:15:33.169 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:33.169 "is_configured": true, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 65536 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "name": "BaseBdev2", 00:15:33.169 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:33.169 "is_configured": true, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 65536 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "name": "BaseBdev3", 00:15:33.169 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:33.169 "is_configured": true, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 65536 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "name": "BaseBdev4", 00:15:33.169 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:33.169 "is_configured": true, 00:15:33.169 "data_offset": 0, 00:15:33.169 "data_size": 65536 00:15:33.169 } 00:15:33.169 ] 00:15:33.169 }' 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.169 11:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 [2024-11-20 11:27:41.305008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.738 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.738 "name": "Existed_Raid", 00:15:33.738 "aliases": [ 00:15:33.738 "cc52190d-9c71-4671-bee3-a5df52ac6b2c" 00:15:33.738 ], 00:15:33.738 "product_name": "Raid Volume", 00:15:33.738 "block_size": 512, 00:15:33.738 "num_blocks": 65536, 00:15:33.738 "uuid": "cc52190d-9c71-4671-bee3-a5df52ac6b2c", 00:15:33.738 "assigned_rate_limits": { 00:15:33.738 "rw_ios_per_sec": 0, 00:15:33.738 "rw_mbytes_per_sec": 0, 00:15:33.738 "r_mbytes_per_sec": 0, 00:15:33.738 "w_mbytes_per_sec": 0 00:15:33.738 }, 00:15:33.738 "claimed": false, 00:15:33.738 "zoned": false, 00:15:33.738 "supported_io_types": { 00:15:33.738 "read": true, 00:15:33.738 "write": true, 00:15:33.738 "unmap": false, 00:15:33.738 "flush": false, 00:15:33.738 "reset": true, 00:15:33.738 "nvme_admin": false, 00:15:33.738 "nvme_io": false, 00:15:33.738 "nvme_io_md": false, 00:15:33.738 "write_zeroes": true, 00:15:33.738 "zcopy": false, 00:15:33.738 "get_zone_info": false, 00:15:33.738 "zone_management": false, 00:15:33.738 "zone_append": false, 00:15:33.738 "compare": false, 00:15:33.738 "compare_and_write": false, 00:15:33.738 "abort": false, 00:15:33.738 "seek_hole": false, 00:15:33.738 "seek_data": false, 00:15:33.738 
"copy": false, 00:15:33.738 "nvme_iov_md": false 00:15:33.738 }, 00:15:33.738 "memory_domains": [ 00:15:33.738 { 00:15:33.738 "dma_device_id": "system", 00:15:33.738 "dma_device_type": 1 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.738 "dma_device_type": 2 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "system", 00:15:33.738 "dma_device_type": 1 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.738 "dma_device_type": 2 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "system", 00:15:33.738 "dma_device_type": 1 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.738 "dma_device_type": 2 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "system", 00:15:33.738 "dma_device_type": 1 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.738 "dma_device_type": 2 00:15:33.738 } 00:15:33.738 ], 00:15:33.738 "driver_specific": { 00:15:33.738 "raid": { 00:15:33.738 "uuid": "cc52190d-9c71-4671-bee3-a5df52ac6b2c", 00:15:33.738 "strip_size_kb": 0, 00:15:33.738 "state": "online", 00:15:33.738 "raid_level": "raid1", 00:15:33.738 "superblock": false, 00:15:33.738 "num_base_bdevs": 4, 00:15:33.738 "num_base_bdevs_discovered": 4, 00:15:33.738 "num_base_bdevs_operational": 4, 00:15:33.738 "base_bdevs_list": [ 00:15:33.738 { 00:15:33.738 "name": "NewBaseBdev", 00:15:33.738 "uuid": "d367a489-0867-4a60-9a1f-9fd91f37cdee", 00:15:33.738 "is_configured": true, 00:15:33.738 "data_offset": 0, 00:15:33.738 "data_size": 65536 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev2", 00:15:33.738 "uuid": "c1e9f8c0-e937-45fa-b7ba-bb95735ca15e", 00:15:33.738 "is_configured": true, 00:15:33.738 "data_offset": 0, 00:15:33.738 "data_size": 65536 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev3", 00:15:33.738 "uuid": "e3f2916b-e884-4721-92d4-21146c6f061e", 00:15:33.738 
"is_configured": true, 00:15:33.738 "data_offset": 0, 00:15:33.738 "data_size": 65536 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev4", 00:15:33.738 "uuid": "360b74b8-3b38-4b9b-8a2f-9e41d6b39454", 00:15:33.738 "is_configured": true, 00:15:33.739 "data_offset": 0, 00:15:33.739 "data_size": 65536 00:15:33.739 } 00:15:33.739 ] 00:15:33.739 } 00:15:33.739 } 00:15:33.739 }' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.739 BaseBdev2 00:15:33.739 BaseBdev3 00:15:33.739 BaseBdev4' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.739 11:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.739 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.002 11:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.002 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.002 [2024-11-20 11:27:41.668673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.002 [2024-11-20 11:27:41.668843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.002 [2024-11-20 11:27:41.669087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.002 [2024-11-20 11:27:41.669609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.003 [2024-11-20 11:27:41.669787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73235 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73235 ']' 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73235 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73235 00:15:34.003 killing process with pid 73235 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73235' 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73235 00:15:34.003 [2024-11-20 11:27:41.706481] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.003 11:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73235 00:15:34.262 [2024-11-20 11:27:42.066791] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:35.640 00:15:35.640 real 0m12.975s 00:15:35.640 user 0m21.650s 00:15:35.640 sys 0m1.759s 00:15:35.640 ************************************ 00:15:35.640 END TEST raid_state_function_test 00:15:35.640 ************************************ 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
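The test tears down by calling `killprocess 73235` (common/autotest_common.sh@954-978 in the trace): verify the pid is alive, kill it, then reap it. A hedged, self-contained sketch of that pattern (the real helper also inspects `ps -o comm=` and special-cases sudo, omitted here):

```shell
# Hedged sketch of the killprocess pattern traced above.
killprocess() {
  local pid=$1
  [[ -n "$pid" ]] || return 1
  # kill -0 sends no signal; it only checks the process still exists
  kill -0 "$pid" 2>/dev/null || return 1
  echo "killing process with pid $pid"
  kill "$pid"
  # Reap the child so no zombie is left; ignore the signal exit status
  wait "$pid" 2>/dev/null || true
}
```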
00:15:35.640 11:27:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:35.640 11:27:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:35.640 11:27:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.640 11:27:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.640 ************************************ 00:15:35.640 START TEST raid_state_function_test_sb 00:15:35.640 ************************************ 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.640 
11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73923 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 73923' 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:35.640 Process raid pid: 73923 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73923 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.640 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.640 [2024-11-20 11:27:43.283018] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:35.640 [2024-11-20 11:27:43.283202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.640 [2024-11-20 11:27:43.462806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.899 [2024-11-20 11:27:43.593110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.157 [2024-11-20 11:27:43.800013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.157 [2024-11-20 11:27:43.800210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.416 [2024-11-20 11:27:44.206837] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.416 [2024-11-20 11:27:44.207061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.416 [2024-11-20 11:27:44.207188] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.416 [2024-11-20 11:27:44.207224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.416 [2024-11-20 11:27:44.207237] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:36.416 [2024-11-20 11:27:44.207253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.416 [2024-11-20 11:27:44.207263] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:36.416 [2024-11-20 11:27:44.207277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.416 11:27:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.416 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.694 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.694 "name": "Existed_Raid", 00:15:36.694 "uuid": "3967dd56-0160-4f95-b465-7a50661ac08e", 00:15:36.694 "strip_size_kb": 0, 00:15:36.694 "state": "configuring", 00:15:36.694 "raid_level": "raid1", 00:15:36.694 "superblock": true, 00:15:36.694 "num_base_bdevs": 4, 00:15:36.694 "num_base_bdevs_discovered": 0, 00:15:36.694 "num_base_bdevs_operational": 4, 00:15:36.694 "base_bdevs_list": [ 00:15:36.694 { 00:15:36.694 "name": "BaseBdev1", 00:15:36.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.694 "is_configured": false, 00:15:36.694 "data_offset": 0, 00:15:36.694 "data_size": 0 00:15:36.694 }, 00:15:36.694 { 00:15:36.694 "name": "BaseBdev2", 00:15:36.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.694 "is_configured": false, 00:15:36.694 "data_offset": 0, 00:15:36.694 "data_size": 0 00:15:36.694 }, 00:15:36.694 { 00:15:36.694 "name": "BaseBdev3", 00:15:36.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.694 "is_configured": false, 00:15:36.694 "data_offset": 0, 00:15:36.694 "data_size": 0 00:15:36.694 }, 00:15:36.694 { 00:15:36.694 "name": "BaseBdev4", 00:15:36.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.694 "is_configured": false, 00:15:36.694 "data_offset": 0, 00:15:36.694 "data_size": 0 00:15:36.694 } 00:15:36.694 ] 00:15:36.694 }' 00:15:36.694 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.694 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.953 11:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.953 [2024-11-20 11:27:44.730883] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.953 [2024-11-20 11:27:44.730931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.953 [2024-11-20 11:27:44.738928] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.953 [2024-11-20 11:27:44.739097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.953 [2024-11-20 11:27:44.739217] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.953 [2024-11-20 11:27:44.739350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.953 [2024-11-20 11:27:44.739465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.953 [2024-11-20 11:27:44.739524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.953 [2024-11-20 11:27:44.739644] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:15:36.953 [2024-11-20 11:27:44.739706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.953 [2024-11-20 11:27:44.783539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.953 BaseBdev1 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.953 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.213 [ 00:15:37.213 { 00:15:37.213 "name": "BaseBdev1", 00:15:37.213 "aliases": [ 00:15:37.213 "0f729e44-7764-4b09-9564-a63a6263da3b" 00:15:37.213 ], 00:15:37.213 "product_name": "Malloc disk", 00:15:37.213 "block_size": 512, 00:15:37.213 "num_blocks": 65536, 00:15:37.213 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:37.213 "assigned_rate_limits": { 00:15:37.213 "rw_ios_per_sec": 0, 00:15:37.214 "rw_mbytes_per_sec": 0, 00:15:37.214 "r_mbytes_per_sec": 0, 00:15:37.214 "w_mbytes_per_sec": 0 00:15:37.214 }, 00:15:37.214 "claimed": true, 00:15:37.214 "claim_type": "exclusive_write", 00:15:37.214 "zoned": false, 00:15:37.214 "supported_io_types": { 00:15:37.214 "read": true, 00:15:37.214 "write": true, 00:15:37.214 "unmap": true, 00:15:37.214 "flush": true, 00:15:37.214 "reset": true, 00:15:37.214 "nvme_admin": false, 00:15:37.214 "nvme_io": false, 00:15:37.214 "nvme_io_md": false, 00:15:37.214 "write_zeroes": true, 00:15:37.214 "zcopy": true, 00:15:37.214 "get_zone_info": false, 00:15:37.214 "zone_management": false, 00:15:37.214 "zone_append": false, 00:15:37.214 "compare": false, 00:15:37.214 "compare_and_write": false, 00:15:37.214 "abort": true, 00:15:37.214 "seek_hole": false, 00:15:37.214 "seek_data": false, 00:15:37.214 "copy": true, 00:15:37.214 "nvme_iov_md": false 00:15:37.214 }, 00:15:37.214 "memory_domains": [ 00:15:37.214 { 00:15:37.214 "dma_device_id": "system", 00:15:37.214 "dma_device_type": 1 00:15:37.214 }, 00:15:37.214 { 00:15:37.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.214 "dma_device_type": 2 00:15:37.214 } 00:15:37.214 
], 00:15:37.214 "driver_specific": {} 00:15:37.214 } 00:15:37.214 ] 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.214 11:27:44 
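After creating BaseBdev1 the test runs `waitforbdev BaseBdev1` (common/autotest_common.sh@903-911 in the trace), which blocks until the bdev is visible over RPC or a timeout (2000 ms by default, per `bdev_timeout=2000` above) expires. A hedged sketch of that polling loop, with `rpc_cmd` stubbed instead of talking to /var/tmp/spdk.sock:

```shell
# Stub standing in for SPDK's rpc_cmd wrapper (assumption: the real one
# dispatches to scripts/rpc.py over the app's UNIX socket).
rpc_cmd() { return 0; }

# Hedged sketch of the waitforbdev helper traced above: poll
# bdev_get_bdevs every 100 ms until the bdev appears or time runs out.
waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in milliseconds
  local i
  for ((i = 0; i < bdev_timeout / 100; i++)); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" >/dev/null 2>&1; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 appeared"
```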
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.214 "name": "Existed_Raid", 00:15:37.214 "uuid": "f1c208d8-dc2b-4c39-8bb0-223c9c0dd01d", 00:15:37.214 "strip_size_kb": 0, 00:15:37.214 "state": "configuring", 00:15:37.214 "raid_level": "raid1", 00:15:37.214 "superblock": true, 00:15:37.214 "num_base_bdevs": 4, 00:15:37.214 "num_base_bdevs_discovered": 1, 00:15:37.214 "num_base_bdevs_operational": 4, 00:15:37.214 "base_bdevs_list": [ 00:15:37.214 { 00:15:37.214 "name": "BaseBdev1", 00:15:37.214 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:37.214 "is_configured": true, 00:15:37.214 "data_offset": 2048, 00:15:37.214 "data_size": 63488 00:15:37.214 }, 00:15:37.214 { 00:15:37.214 "name": "BaseBdev2", 00:15:37.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.214 "is_configured": false, 00:15:37.214 "data_offset": 0, 00:15:37.214 "data_size": 0 00:15:37.214 }, 00:15:37.214 { 00:15:37.214 "name": "BaseBdev3", 00:15:37.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.214 "is_configured": false, 00:15:37.214 "data_offset": 0, 00:15:37.214 "data_size": 0 00:15:37.214 }, 00:15:37.214 { 00:15:37.214 "name": "BaseBdev4", 00:15:37.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.214 "is_configured": false, 00:15:37.214 "data_offset": 0, 00:15:37.214 "data_size": 0 00:15:37.214 } 00:15:37.214 ] 00:15:37.214 }' 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.214 11:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 11:27:45 
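Each `verify_raid_bdev_state` call above (bdev_raid.sh@103-115) captures `raid_bdev_info` by filtering the full `bdev_raid_get_bdevs all` output down to one entry with jq, then checks fields like state and the discovered/operational counts. A hedged sketch of that selection step, using a trimmed stand-in JSON rather than real RPC output:

```shell
# Hedged sketch of the bdev_raid.sh@113 jq filter traced above; the JSON
# here is a trimmed stand-in for `rpc_cmd bdev_raid_get_bdevs all`.
bdevs_json='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid1",
"num_base_bdevs_discovered":1,"num_base_bdevs_operational":4}]'

raid_bdev_info=$(echo "$bdevs_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields are then extracted and compared against expectations
state=$(echo "$raid_bdev_info" | jq -r '.state')
[[ "$state" == "configuring" ]] && echo "Existed_Raid is configuring"
```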
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 [2024-11-20 11:27:45.327770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.783 [2024-11-20 11:27:45.328007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 [2024-11-20 11:27:45.335801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.783 [2024-11-20 11:27:45.338507] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.783 [2024-11-20 11:27:45.338700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.783 [2024-11-20 11:27:45.338822] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.783 [2024-11-20 11:27:45.338891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.783 [2024-11-20 11:27:45.339160] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.783 [2024-11-20 11:27:45.339221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:37.783 "name": "Existed_Raid", 00:15:37.783 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:37.783 "strip_size_kb": 0, 00:15:37.783 "state": "configuring", 00:15:37.783 "raid_level": "raid1", 00:15:37.783 "superblock": true, 00:15:37.783 "num_base_bdevs": 4, 00:15:37.783 "num_base_bdevs_discovered": 1, 00:15:37.783 "num_base_bdevs_operational": 4, 00:15:37.783 "base_bdevs_list": [ 00:15:37.783 { 00:15:37.783 "name": "BaseBdev1", 00:15:37.783 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:37.783 "is_configured": true, 00:15:37.783 "data_offset": 2048, 00:15:37.783 "data_size": 63488 00:15:37.783 }, 00:15:37.783 { 00:15:37.783 "name": "BaseBdev2", 00:15:37.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.783 "is_configured": false, 00:15:37.783 "data_offset": 0, 00:15:37.783 "data_size": 0 00:15:37.783 }, 00:15:37.783 { 00:15:37.783 "name": "BaseBdev3", 00:15:37.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.783 "is_configured": false, 00:15:37.783 "data_offset": 0, 00:15:37.783 "data_size": 0 00:15:37.783 }, 00:15:37.783 { 00:15:37.783 "name": "BaseBdev4", 00:15:37.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.783 "is_configured": false, 00:15:37.783 "data_offset": 0, 00:15:37.783 "data_size": 0 00:15:37.783 } 00:15:37.783 ] 00:15:37.783 }' 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.783 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.042 [2024-11-20 11:27:45.878844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:38.042 BaseBdev2 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.042 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.300 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.300 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.300 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.300 [ 00:15:38.300 { 00:15:38.300 "name": "BaseBdev2", 00:15:38.300 "aliases": [ 00:15:38.300 "7ffc111f-0132-4132-84bc-3628c247079b" 00:15:38.301 ], 00:15:38.301 "product_name": "Malloc disk", 00:15:38.301 "block_size": 512, 00:15:38.301 "num_blocks": 65536, 00:15:38.301 "uuid": "7ffc111f-0132-4132-84bc-3628c247079b", 00:15:38.301 
"assigned_rate_limits": { 00:15:38.301 "rw_ios_per_sec": 0, 00:15:38.301 "rw_mbytes_per_sec": 0, 00:15:38.301 "r_mbytes_per_sec": 0, 00:15:38.301 "w_mbytes_per_sec": 0 00:15:38.301 }, 00:15:38.301 "claimed": true, 00:15:38.301 "claim_type": "exclusive_write", 00:15:38.301 "zoned": false, 00:15:38.301 "supported_io_types": { 00:15:38.301 "read": true, 00:15:38.301 "write": true, 00:15:38.301 "unmap": true, 00:15:38.301 "flush": true, 00:15:38.301 "reset": true, 00:15:38.301 "nvme_admin": false, 00:15:38.301 "nvme_io": false, 00:15:38.301 "nvme_io_md": false, 00:15:38.301 "write_zeroes": true, 00:15:38.301 "zcopy": true, 00:15:38.301 "get_zone_info": false, 00:15:38.301 "zone_management": false, 00:15:38.301 "zone_append": false, 00:15:38.301 "compare": false, 00:15:38.301 "compare_and_write": false, 00:15:38.301 "abort": true, 00:15:38.301 "seek_hole": false, 00:15:38.301 "seek_data": false, 00:15:38.301 "copy": true, 00:15:38.301 "nvme_iov_md": false 00:15:38.301 }, 00:15:38.301 "memory_domains": [ 00:15:38.301 { 00:15:38.301 "dma_device_id": "system", 00:15:38.301 "dma_device_type": 1 00:15:38.301 }, 00:15:38.301 { 00:15:38.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.301 "dma_device_type": 2 00:15:38.301 } 00:15:38.301 ], 00:15:38.301 "driver_specific": {} 00:15:38.301 } 00:15:38.301 ] 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.301 "name": "Existed_Raid", 00:15:38.301 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:38.301 "strip_size_kb": 0, 00:15:38.301 "state": "configuring", 00:15:38.301 "raid_level": "raid1", 00:15:38.301 "superblock": true, 00:15:38.301 "num_base_bdevs": 4, 00:15:38.301 "num_base_bdevs_discovered": 2, 00:15:38.301 "num_base_bdevs_operational": 4, 
00:15:38.301 "base_bdevs_list": [ 00:15:38.301 { 00:15:38.301 "name": "BaseBdev1", 00:15:38.301 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:38.301 "is_configured": true, 00:15:38.301 "data_offset": 2048, 00:15:38.301 "data_size": 63488 00:15:38.301 }, 00:15:38.301 { 00:15:38.301 "name": "BaseBdev2", 00:15:38.301 "uuid": "7ffc111f-0132-4132-84bc-3628c247079b", 00:15:38.301 "is_configured": true, 00:15:38.301 "data_offset": 2048, 00:15:38.301 "data_size": 63488 00:15:38.301 }, 00:15:38.301 { 00:15:38.301 "name": "BaseBdev3", 00:15:38.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.301 "is_configured": false, 00:15:38.301 "data_offset": 0, 00:15:38.301 "data_size": 0 00:15:38.301 }, 00:15:38.301 { 00:15:38.301 "name": "BaseBdev4", 00:15:38.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.301 "is_configured": false, 00:15:38.301 "data_offset": 0, 00:15:38.301 "data_size": 0 00:15:38.301 } 00:15:38.301 ] 00:15:38.301 }' 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.301 11:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 BaseBdev3 00:15:38.869 [2024-11-20 11:27:46.481798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 [ 00:15:38.869 { 00:15:38.869 "name": "BaseBdev3", 00:15:38.869 "aliases": [ 00:15:38.869 "d471890f-4141-4fd9-aabe-2845a064e32d" 00:15:38.869 ], 00:15:38.869 "product_name": "Malloc disk", 00:15:38.869 "block_size": 512, 00:15:38.869 "num_blocks": 65536, 00:15:38.869 "uuid": "d471890f-4141-4fd9-aabe-2845a064e32d", 00:15:38.869 "assigned_rate_limits": { 00:15:38.869 "rw_ios_per_sec": 0, 00:15:38.869 "rw_mbytes_per_sec": 0, 00:15:38.869 "r_mbytes_per_sec": 0, 00:15:38.869 "w_mbytes_per_sec": 0 00:15:38.869 }, 00:15:38.869 "claimed": true, 00:15:38.869 "claim_type": "exclusive_write", 00:15:38.869 "zoned": false, 00:15:38.869 "supported_io_types": { 00:15:38.869 "read": true, 00:15:38.869 
"write": true, 00:15:38.869 "unmap": true, 00:15:38.869 "flush": true, 00:15:38.869 "reset": true, 00:15:38.869 "nvme_admin": false, 00:15:38.869 "nvme_io": false, 00:15:38.869 "nvme_io_md": false, 00:15:38.869 "write_zeroes": true, 00:15:38.869 "zcopy": true, 00:15:38.869 "get_zone_info": false, 00:15:38.869 "zone_management": false, 00:15:38.869 "zone_append": false, 00:15:38.869 "compare": false, 00:15:38.869 "compare_and_write": false, 00:15:38.869 "abort": true, 00:15:38.869 "seek_hole": false, 00:15:38.869 "seek_data": false, 00:15:38.869 "copy": true, 00:15:38.869 "nvme_iov_md": false 00:15:38.869 }, 00:15:38.869 "memory_domains": [ 00:15:38.869 { 00:15:38.869 "dma_device_id": "system", 00:15:38.869 "dma_device_type": 1 00:15:38.869 }, 00:15:38.869 { 00:15:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.869 "dma_device_type": 2 00:15:38.869 } 00:15:38.869 ], 00:15:38.869 "driver_specific": {} 00:15:38.869 } 00:15:38.869 ] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.869 "name": "Existed_Raid", 00:15:38.869 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:38.869 "strip_size_kb": 0, 00:15:38.869 "state": "configuring", 00:15:38.869 "raid_level": "raid1", 00:15:38.869 "superblock": true, 00:15:38.869 "num_base_bdevs": 4, 00:15:38.869 "num_base_bdevs_discovered": 3, 00:15:38.869 "num_base_bdevs_operational": 4, 00:15:38.869 "base_bdevs_list": [ 00:15:38.869 { 00:15:38.869 "name": "BaseBdev1", 00:15:38.869 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:38.869 "is_configured": true, 00:15:38.869 "data_offset": 2048, 00:15:38.869 "data_size": 63488 00:15:38.869 }, 00:15:38.869 { 00:15:38.869 "name": "BaseBdev2", 00:15:38.869 "uuid": 
"7ffc111f-0132-4132-84bc-3628c247079b", 00:15:38.869 "is_configured": true, 00:15:38.869 "data_offset": 2048, 00:15:38.869 "data_size": 63488 00:15:38.869 }, 00:15:38.869 { 00:15:38.869 "name": "BaseBdev3", 00:15:38.869 "uuid": "d471890f-4141-4fd9-aabe-2845a064e32d", 00:15:38.869 "is_configured": true, 00:15:38.869 "data_offset": 2048, 00:15:38.869 "data_size": 63488 00:15:38.869 }, 00:15:38.869 { 00:15:38.869 "name": "BaseBdev4", 00:15:38.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.869 "is_configured": false, 00:15:38.869 "data_offset": 0, 00:15:38.869 "data_size": 0 00:15:38.869 } 00:15:38.869 ] 00:15:38.869 }' 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.869 11:27:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.437 [2024-11-20 11:27:47.070302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.437 BaseBdev4 00:15:39.437 [2024-11-20 11:27:47.070850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:39.437 [2024-11-20 11:27:47.070876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:39.437 [2024-11-20 11:27:47.071251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.437 [2024-11-20 11:27:47.071455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:39.437 [2024-11-20 11:27:47.071479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:39.437 [2024-11-20 11:27:47.071697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.437 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 [ 00:15:39.438 { 00:15:39.438 "name": "BaseBdev4", 00:15:39.438 "aliases": [ 00:15:39.438 "39691929-19ad-4b56-b8b5-3e41e1556fa1" 00:15:39.438 ], 00:15:39.438 "product_name": "Malloc disk", 00:15:39.438 "block_size": 512, 00:15:39.438 
"num_blocks": 65536, 00:15:39.438 "uuid": "39691929-19ad-4b56-b8b5-3e41e1556fa1", 00:15:39.438 "assigned_rate_limits": { 00:15:39.438 "rw_ios_per_sec": 0, 00:15:39.438 "rw_mbytes_per_sec": 0, 00:15:39.438 "r_mbytes_per_sec": 0, 00:15:39.438 "w_mbytes_per_sec": 0 00:15:39.438 }, 00:15:39.438 "claimed": true, 00:15:39.438 "claim_type": "exclusive_write", 00:15:39.438 "zoned": false, 00:15:39.438 "supported_io_types": { 00:15:39.438 "read": true, 00:15:39.438 "write": true, 00:15:39.438 "unmap": true, 00:15:39.438 "flush": true, 00:15:39.438 "reset": true, 00:15:39.438 "nvme_admin": false, 00:15:39.438 "nvme_io": false, 00:15:39.438 "nvme_io_md": false, 00:15:39.438 "write_zeroes": true, 00:15:39.438 "zcopy": true, 00:15:39.438 "get_zone_info": false, 00:15:39.438 "zone_management": false, 00:15:39.438 "zone_append": false, 00:15:39.438 "compare": false, 00:15:39.438 "compare_and_write": false, 00:15:39.438 "abort": true, 00:15:39.438 "seek_hole": false, 00:15:39.438 "seek_data": false, 00:15:39.438 "copy": true, 00:15:39.438 "nvme_iov_md": false 00:15:39.438 }, 00:15:39.438 "memory_domains": [ 00:15:39.438 { 00:15:39.438 "dma_device_id": "system", 00:15:39.438 "dma_device_type": 1 00:15:39.438 }, 00:15:39.438 { 00:15:39.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.438 "dma_device_type": 2 00:15:39.438 } 00:15:39.438 ], 00:15:39.438 "driver_specific": {} 00:15:39.438 } 00:15:39.438 ] 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
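The `waitforbdev` calls in the trace above (for BaseBdev2/3/4) first run `bdev_wait_for_examine`, then poll `rpc_cmd bdev_get_bdevs -b NAME -t 2000` with a 2000 ms timeout until the newly created malloc bdev is visible. A minimal Python sketch of that wait-for-bdev pattern, with a hypothetical `get_bdev` callable standing in for the SPDK `bdev_get_bdevs` RPC:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_ms=2000, poll_ms=50):
    """Poll get_bdev(name) until it returns a bdev dict or the timeout expires.

    get_bdev is a stand-in for the SPDK `bdev_get_bdevs -b NAME` RPC call;
    it should return a dict describing the bdev, or None if not yet present.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError(f"bdev {name!r} did not appear within {timeout_ms} ms")
```

With a plain dict as a fake registry, `wait_for_bdev(registry.get, "BaseBdev4")` returns as soon as the bdev shows up, matching the shell helper's behavior of returning 0 once `bdev_get_bdevs` succeeds.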
00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.438 "name": "Existed_Raid", 00:15:39.438 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:39.438 "strip_size_kb": 0, 00:15:39.438 "state": "online", 00:15:39.438 "raid_level": "raid1", 00:15:39.438 "superblock": true, 00:15:39.438 "num_base_bdevs": 4, 
00:15:39.438 "num_base_bdevs_discovered": 4, 00:15:39.438 "num_base_bdevs_operational": 4, 00:15:39.438 "base_bdevs_list": [ 00:15:39.438 { 00:15:39.438 "name": "BaseBdev1", 00:15:39.438 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:39.438 "is_configured": true, 00:15:39.438 "data_offset": 2048, 00:15:39.438 "data_size": 63488 00:15:39.438 }, 00:15:39.438 { 00:15:39.438 "name": "BaseBdev2", 00:15:39.438 "uuid": "7ffc111f-0132-4132-84bc-3628c247079b", 00:15:39.438 "is_configured": true, 00:15:39.438 "data_offset": 2048, 00:15:39.438 "data_size": 63488 00:15:39.438 }, 00:15:39.438 { 00:15:39.438 "name": "BaseBdev3", 00:15:39.438 "uuid": "d471890f-4141-4fd9-aabe-2845a064e32d", 00:15:39.438 "is_configured": true, 00:15:39.438 "data_offset": 2048, 00:15:39.438 "data_size": 63488 00:15:39.438 }, 00:15:39.438 { 00:15:39.438 "name": "BaseBdev4", 00:15:39.438 "uuid": "39691929-19ad-4b56-b8b5-3e41e1556fa1", 00:15:39.438 "is_configured": true, 00:15:39.438 "data_offset": 2048, 00:15:39.438 "data_size": 63488 00:15:39.438 } 00:15:39.438 ] 00:15:39.438 }' 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.438 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:40.006 
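Each `verify_raid_bdev_state` invocation in the trace fetches `bdev_raid_get_bdevs all`, picks out the entry by name with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the state, RAID level, strip size, and base-bdev counts against the expected values. A hedged Python equivalent of that check (the field names come from the JSON in the trace; the helper itself is a hypothetical sketch, not the shell function):

```python
import json

def verify_raid_bdev_state(raid_bdevs_json, name, expected_state,
                           raid_level, strip_size_kb, num_operational):
    """Mirror the jq select + field comparisons done by verify_raid_bdev_state."""
    # jq equivalent: .[] | select(.name == NAME)
    info = next(b for b in json.loads(raid_bdevs_json) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # discovered bdevs can never exceed the total number of base bdev slots
    assert info["num_base_bdevs_discovered"] <= info["num_base_bdevs"]
    return info
```

For the "online" snapshot above, calling it with `("Existed_Raid", "online", "raid1", 0, 4)` passes once all four base bdevs are discovered.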
11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.006 [2024-11-20 11:27:47.630970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.006 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:40.006 "name": "Existed_Raid", 00:15:40.006 "aliases": [ 00:15:40.006 "deccc586-2fda-407b-9dd6-fd63d7a353cf" 00:15:40.006 ], 00:15:40.006 "product_name": "Raid Volume", 00:15:40.006 "block_size": 512, 00:15:40.006 "num_blocks": 63488, 00:15:40.006 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:40.006 "assigned_rate_limits": { 00:15:40.006 "rw_ios_per_sec": 0, 00:15:40.006 "rw_mbytes_per_sec": 0, 00:15:40.006 "r_mbytes_per_sec": 0, 00:15:40.006 "w_mbytes_per_sec": 0 00:15:40.006 }, 00:15:40.006 "claimed": false, 00:15:40.006 "zoned": false, 00:15:40.006 "supported_io_types": { 00:15:40.006 "read": true, 00:15:40.006 "write": true, 00:15:40.006 "unmap": false, 00:15:40.006 "flush": false, 00:15:40.006 "reset": true, 00:15:40.006 "nvme_admin": false, 00:15:40.006 "nvme_io": false, 00:15:40.006 "nvme_io_md": false, 00:15:40.007 "write_zeroes": true, 00:15:40.007 "zcopy": false, 00:15:40.007 "get_zone_info": false, 00:15:40.007 "zone_management": false, 00:15:40.007 "zone_append": false, 00:15:40.007 "compare": false, 00:15:40.007 "compare_and_write": false, 00:15:40.007 "abort": false, 00:15:40.007 "seek_hole": false, 00:15:40.007 "seek_data": false, 00:15:40.007 "copy": false, 00:15:40.007 
"nvme_iov_md": false 00:15:40.007 }, 00:15:40.007 "memory_domains": [ 00:15:40.007 { 00:15:40.007 "dma_device_id": "system", 00:15:40.007 "dma_device_type": 1 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.007 "dma_device_type": 2 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "system", 00:15:40.007 "dma_device_type": 1 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.007 "dma_device_type": 2 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "system", 00:15:40.007 "dma_device_type": 1 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.007 "dma_device_type": 2 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "system", 00:15:40.007 "dma_device_type": 1 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.007 "dma_device_type": 2 00:15:40.007 } 00:15:40.007 ], 00:15:40.007 "driver_specific": { 00:15:40.007 "raid": { 00:15:40.007 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf", 00:15:40.007 "strip_size_kb": 0, 00:15:40.007 "state": "online", 00:15:40.007 "raid_level": "raid1", 00:15:40.007 "superblock": true, 00:15:40.007 "num_base_bdevs": 4, 00:15:40.007 "num_base_bdevs_discovered": 4, 00:15:40.007 "num_base_bdevs_operational": 4, 00:15:40.007 "base_bdevs_list": [ 00:15:40.007 { 00:15:40.007 "name": "BaseBdev1", 00:15:40.007 "uuid": "0f729e44-7764-4b09-9564-a63a6263da3b", 00:15:40.007 "is_configured": true, 00:15:40.007 "data_offset": 2048, 00:15:40.007 "data_size": 63488 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "name": "BaseBdev2", 00:15:40.007 "uuid": "7ffc111f-0132-4132-84bc-3628c247079b", 00:15:40.007 "is_configured": true, 00:15:40.007 "data_offset": 2048, 00:15:40.007 "data_size": 63488 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "name": "BaseBdev3", 00:15:40.007 "uuid": "d471890f-4141-4fd9-aabe-2845a064e32d", 00:15:40.007 "is_configured": true, 
00:15:40.007 "data_offset": 2048, 00:15:40.007 "data_size": 63488 00:15:40.007 }, 00:15:40.007 { 00:15:40.007 "name": "BaseBdev4", 00:15:40.007 "uuid": "39691929-19ad-4b56-b8b5-3e41e1556fa1", 00:15:40.007 "is_configured": true, 00:15:40.007 "data_offset": 2048, 00:15:40.007 "data_size": 63488 00:15:40.007 } 00:15:40.007 ] 00:15:40.007 } 00:15:40.007 } 00:15:40.007 }' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:40.007 BaseBdev2 00:15:40.007 BaseBdev3 00:15:40.007 BaseBdev4' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.007 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.267 11:27:47 
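The `verify_raid_bdev_properties` loop above builds a shape key per bdev with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and compares the raid volume's key against each base bdev's. Because jq's `join` renders `null` as an empty string, a plain 512-byte bdev with no metadata yields `"512"` followed by three spaces, which is why `cmp_base_bdev='512 '` carries trailing whitespace and the `[[ 512 == \5\1\2\ \ \ ]]` pattern escapes three spaces. A small Python sketch of the same key construction (field names from the trace's JSON; the function name is illustrative):

```python
def bdev_shape_key(bdev):
    """Replicate jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`.

    jq's join() turns null into an empty string and booleans into
    lowercase "true"/"false", so missing metadata fields produce
    trailing spaces in the joined key.
    """
    fields = [bdev.get(k) for k in
              ("block_size", "md_size", "md_interleave", "dif_type")]
    return " ".join(
        "" if v is None
        else (str(v).lower() if isinstance(v, bool) else str(v))
        for v in fields)
```

Comparing these keys string-for-string (trailing spaces included) is what lets the test confirm every base bdev shares the raid volume's block size and metadata layout.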
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 11:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 [2024-11-20 11:27:48.002721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:40.267 11:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.267 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:40.526 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.526 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:40.526 "name": "Existed_Raid",
00:15:40.526 "uuid": "deccc586-2fda-407b-9dd6-fd63d7a353cf",
00:15:40.526 "strip_size_kb": 0,
00:15:40.526 "state": "online",
00:15:40.526 "raid_level": "raid1",
00:15:40.526 "superblock": true,
00:15:40.526 "num_base_bdevs": 4,
00:15:40.526 "num_base_bdevs_discovered": 3,
00:15:40.526 "num_base_bdevs_operational": 3,
00:15:40.526 "base_bdevs_list": [
00:15:40.526 {
00:15:40.526 "name": null,
00:15:40.526 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:40.526 "is_configured": false,
00:15:40.526 "data_offset": 0,
00:15:40.526 "data_size": 63488
00:15:40.526 },
00:15:40.526 {
00:15:40.526 "name": "BaseBdev2",
00:15:40.526 "uuid": "7ffc111f-0132-4132-84bc-3628c247079b",
00:15:40.526 "is_configured": true,
00:15:40.526 "data_offset": 2048,
00:15:40.526 "data_size": 63488
00:15:40.526 },
00:15:40.526 {
00:15:40.526 "name": "BaseBdev3",
00:15:40.526 "uuid": "d471890f-4141-4fd9-aabe-2845a064e32d",
00:15:40.526 "is_configured": true,
00:15:40.526 "data_offset": 2048,
00:15:40.526 "data_size": 63488
00:15:40.526 },
00:15:40.526 {
00:15:40.526 "name": "BaseBdev4",
00:15:40.526 "uuid": "39691929-19ad-4b56-b8b5-3e41e1556fa1",
00:15:40.526 "is_configured": true,
00:15:40.526 "data_offset": 2048,
00:15:40.526 "data_size": 63488
00:15:40.526 }
00:15:40.526 ]
00:15:40.526 }'
00:15:40.526 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:40.526 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.093 [2024-11-20 11:27:48.704866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.093 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.094 [2024-11-20 11:27:48.847044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:41.094 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.352 11:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.352 [2024-11-20 11:27:48.997384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:15:41.352 [2024-11-20 11:27:48.997673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:41.352 [2024-11-20 11:27:49.083067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:41.352 [2024-11-20 11:27:49.083352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:41.352 [2024-11-20 11:27:49.083389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.352 BaseBdev2
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.352 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.610 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.610 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:41.610 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.610 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.610 [
00:15:41.610 {
00:15:41.610 "name": "BaseBdev2",
00:15:41.610 "aliases": [
00:15:41.610 "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039"
00:15:41.610 ],
00:15:41.610 "product_name": "Malloc disk",
00:15:41.610 "block_size": 512,
00:15:41.610 "num_blocks": 65536,
00:15:41.610 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039",
00:15:41.610 "assigned_rate_limits": {
00:15:41.610 "rw_ios_per_sec": 0,
00:15:41.610 "rw_mbytes_per_sec": 0,
00:15:41.610 "r_mbytes_per_sec": 0,
00:15:41.610 "w_mbytes_per_sec": 0
00:15:41.610 },
00:15:41.610 "claimed": false,
00:15:41.610 "zoned": false,
00:15:41.610 "supported_io_types": {
00:15:41.610 "read": true,
00:15:41.610 "write": true,
00:15:41.610 "unmap": true,
00:15:41.610 "flush": true,
00:15:41.610 "reset": true,
00:15:41.610 "nvme_admin": false,
00:15:41.610 "nvme_io": false,
00:15:41.610 "nvme_io_md": false,
00:15:41.610 "write_zeroes": true,
00:15:41.610 "zcopy": true,
00:15:41.610 "get_zone_info": false,
00:15:41.610 "zone_management": false,
00:15:41.610 "zone_append": false,
00:15:41.610 "compare": false,
00:15:41.610 "compare_and_write": false,
00:15:41.610 "abort": true,
00:15:41.610 "seek_hole": false,
00:15:41.610 "seek_data": false,
00:15:41.610 "copy": true,
00:15:41.611 "nvme_iov_md": false
00:15:41.611 },
00:15:41.611 "memory_domains": [
00:15:41.611 {
00:15:41.611 "dma_device_id": "system",
00:15:41.611 "dma_device_type": 1
00:15:41.611 },
00:15:41.611 {
00:15:41.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:41.611 "dma_device_type": 2
00:15:41.611 }
00:15:41.611 ],
00:15:41.611 "driver_specific": {}
00:15:41.611 }
00:15:41.611 ]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 BaseBdev3
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 [
00:15:41.611 {
00:15:41.611 "name": "BaseBdev3",
00:15:41.611 "aliases": [
00:15:41.611 "66491000-eee7-48bb-92e3-00f84d19651f"
00:15:41.611 ],
00:15:41.611 "product_name": "Malloc disk",
00:15:41.611 "block_size": 512,
00:15:41.611 "num_blocks": 65536,
00:15:41.611 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f",
00:15:41.611 "assigned_rate_limits": {
00:15:41.611 "rw_ios_per_sec": 0,
00:15:41.611 "rw_mbytes_per_sec": 0,
00:15:41.611 "r_mbytes_per_sec": 0,
00:15:41.611 "w_mbytes_per_sec": 0
00:15:41.611 },
00:15:41.611 "claimed": false,
00:15:41.611 "zoned": false,
00:15:41.611 "supported_io_types": {
00:15:41.611 "read": true,
00:15:41.611 "write": true,
00:15:41.611 "unmap": true,
00:15:41.611 "flush": true,
00:15:41.611 "reset": true,
00:15:41.611 "nvme_admin": false,
00:15:41.611 "nvme_io": false,
00:15:41.611 "nvme_io_md": false,
00:15:41.611 "write_zeroes": true,
00:15:41.611 "zcopy": true,
00:15:41.611 "get_zone_info": false,
00:15:41.611 "zone_management": false,
00:15:41.611 "zone_append": false,
00:15:41.611 "compare": false,
00:15:41.611 "compare_and_write": false,
00:15:41.611 "abort": true,
00:15:41.611 "seek_hole": false,
00:15:41.611 "seek_data": false,
00:15:41.611 "copy": true,
00:15:41.611 "nvme_iov_md": false
00:15:41.611 },
00:15:41.611 "memory_domains": [
00:15:41.611 {
00:15:41.611 "dma_device_id": "system",
00:15:41.611 "dma_device_type": 1
00:15:41.611 },
00:15:41.611 {
00:15:41.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:41.611 "dma_device_type": 2
00:15:41.611 }
00:15:41.611 ],
00:15:41.611 "driver_specific": {}
00:15:41.611 }
00:15:41.611 ]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 BaseBdev4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 [
00:15:41.611 {
00:15:41.611 "name": "BaseBdev4",
00:15:41.611 "aliases": [
00:15:41.611 "cbfb6ff9-0960-4a56-8fc9-269efb9168f6"
00:15:41.611 ],
00:15:41.611 "product_name": "Malloc disk",
00:15:41.611 "block_size": 512,
00:15:41.611 "num_blocks": 65536,
00:15:41.611 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6",
00:15:41.611 "assigned_rate_limits": {
00:15:41.611 "rw_ios_per_sec": 0,
00:15:41.611 "rw_mbytes_per_sec": 0,
00:15:41.611 "r_mbytes_per_sec": 0,
00:15:41.611 "w_mbytes_per_sec": 0
00:15:41.611 },
00:15:41.611 "claimed": false,
00:15:41.611 "zoned": false,
00:15:41.611 "supported_io_types": {
00:15:41.611 "read": true,
00:15:41.611 "write": true,
00:15:41.611 "unmap": true,
00:15:41.611 "flush": true,
00:15:41.611 "reset": true,
00:15:41.611 "nvme_admin": false,
00:15:41.611 "nvme_io": false,
00:15:41.611 "nvme_io_md": false,
00:15:41.611 "write_zeroes": true,
00:15:41.611 "zcopy": true,
00:15:41.611 "get_zone_info": false,
00:15:41.611 "zone_management": false,
00:15:41.611 "zone_append": false,
00:15:41.611 "compare": false,
00:15:41.611 "compare_and_write": false,
00:15:41.611 "abort": true,
00:15:41.611 "seek_hole": false,
00:15:41.611 "seek_data": false,
00:15:41.611 "copy": true,
00:15:41.611 "nvme_iov_md": false
00:15:41.611 },
00:15:41.611 "memory_domains": [
00:15:41.611 {
00:15:41.611 "dma_device_id": "system",
00:15:41.611 "dma_device_type": 1
00:15:41.611 },
00:15:41.611 {
00:15:41.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:41.611 "dma_device_type": 2
00:15:41.611 }
00:15:41.611 ],
00:15:41.611 "driver_specific": {}
00:15:41.611 }
00:15:41.611 ]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.611 [2024-11-20 11:27:49.362936] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:41.611 [2024-11-20 11:27:49.363124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:41.611 [2024-11-20 11:27:49.363278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:41.611 [2024-11-20 11:27:49.365815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:41.611 [2024-11-20 11:27:49.366016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:41.611 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:41.612 "name": "Existed_Raid",
00:15:41.612 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa",
00:15:41.612 "strip_size_kb": 0,
00:15:41.612 "state": "configuring",
00:15:41.612 "raid_level": "raid1",
00:15:41.612 "superblock": true,
00:15:41.612 "num_base_bdevs": 4,
00:15:41.612 "num_base_bdevs_discovered": 3,
00:15:41.612 "num_base_bdevs_operational": 4,
00:15:41.612 "base_bdevs_list": [
00:15:41.612 {
00:15:41.612 "name": "BaseBdev1",
00:15:41.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:41.612 "is_configured": false,
00:15:41.612 "data_offset": 0,
00:15:41.612 "data_size": 0
00:15:41.612 },
00:15:41.612 {
00:15:41.612 "name": "BaseBdev2",
00:15:41.612 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039",
00:15:41.612 "is_configured": true,
00:15:41.612 "data_offset": 2048,
00:15:41.612 "data_size": 63488
00:15:41.612 },
00:15:41.612 {
00:15:41.612 "name": "BaseBdev3",
00:15:41.612 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f",
00:15:41.612 "is_configured": true,
00:15:41.612 "data_offset": 2048,
00:15:41.612 "data_size": 63488
00:15:41.612 },
00:15:41.612 {
00:15:41.612 "name": "BaseBdev4",
00:15:41.612 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6",
00:15:41.612 "is_configured": true,
00:15:41.612 "data_offset": 2048,
00:15:41.612 "data_size": 63488
00:15:41.612 }
00:15:41.612 ]
00:15:41.612 }'
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:41.612 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.180 [2024-11-20 11:27:49.851083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.180 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:42.180 "name": "Existed_Raid",
00:15:42.180 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa",
00:15:42.180 "strip_size_kb": 0,
00:15:42.180 "state": "configuring",
00:15:42.180 "raid_level": "raid1",
00:15:42.180 "superblock": true,
00:15:42.180 "num_base_bdevs": 4,
00:15:42.180 "num_base_bdevs_discovered": 2,
00:15:42.180 "num_base_bdevs_operational": 4,
00:15:42.180 "base_bdevs_list": [
00:15:42.180 {
00:15:42.180 "name": "BaseBdev1",
00:15:42.180 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:42.180 "is_configured": false,
00:15:42.180 "data_offset": 0,
00:15:42.180 "data_size": 0
00:15:42.180 },
00:15:42.180 {
00:15:42.180 "name": null,
00:15:42.180 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039",
00:15:42.180 "is_configured": false,
00:15:42.181 "data_offset": 0,
00:15:42.181 "data_size": 63488
00:15:42.181 },
00:15:42.181 {
00:15:42.181 "name": "BaseBdev3",
00:15:42.181 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f",
00:15:42.181 "is_configured": true,
00:15:42.181 "data_offset": 2048,
00:15:42.181 "data_size": 63488
00:15:42.181 },
00:15:42.181 {
00:15:42.181 "name": "BaseBdev4",
00:15:42.181 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6",
00:15:42.181 "is_configured": true,
00:15:42.181 "data_offset": 2048,
00:15:42.181 "data_size": 63488
00:15:42.181 }
00:15:42.181 ]
00:15:42.181 }'
00:15:42.181 11:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:42.181 11:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 [2024-11-20 11:27:50.432797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:42.773 BaseBdev1
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 [
00:15:42.773 {
00:15:42.773 "name": "BaseBdev1",
00:15:42.773 "aliases": [
00:15:42.773 "213a2f83-7052-475e-811d-bb4365c50ed0"
00:15:42.773 ],
00:15:42.773 "product_name": "Malloc disk",
00:15:42.773 "block_size": 512,
00:15:42.773 "num_blocks": 65536,
00:15:42.773 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0",
00:15:42.773 "assigned_rate_limits": {
00:15:42.773 "rw_ios_per_sec": 0,
00:15:42.773 "rw_mbytes_per_sec": 0,
00:15:42.773 "r_mbytes_per_sec": 0,
00:15:42.773 "w_mbytes_per_sec": 0
00:15:42.773 },
00:15:42.773 "claimed": true,
00:15:42.773 "claim_type": "exclusive_write",
00:15:42.773 "zoned": false,
00:15:42.773 "supported_io_types": {
00:15:42.773 "read": true,
00:15:42.773 "write": true,
00:15:42.773 "unmap": true,
00:15:42.773 "flush": true,
00:15:42.773 "reset": true,
00:15:42.773 "nvme_admin": false,
00:15:42.773 "nvme_io": false,
00:15:42.773 "nvme_io_md": false,
00:15:42.773 "write_zeroes": true,
00:15:42.773 "zcopy": true,
00:15:42.773 "get_zone_info": false,
00:15:42.773 "zone_management": false,
00:15:42.773 "zone_append": false,
00:15:42.773 "compare": false,
00:15:42.773 "compare_and_write": false,
00:15:42.773 "abort": true,
00:15:42.773 "seek_hole": false,
00:15:42.773 "seek_data": false,
00:15:42.773 "copy": true,
00:15:42.773 "nvme_iov_md": false
00:15:42.773 },
00:15:42.773 "memory_domains": [
00:15:42.773 {
00:15:42.773 "dma_device_id": "system",
00:15:42.773 "dma_device_type": 1
00:15:42.773 },
00:15:42.773 {
00:15:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:42.773 "dma_device_type": 2
00:15:42.773 }
00:15:42.773 ],
00:15:42.773 "driver_specific": {}
00:15:42.773 }
00:15:42.773 ]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:42.773 "name": "Existed_Raid",
00:15:42.773 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa",
00:15:42.773 "strip_size_kb": 0,
00:15:42.773 "state": "configuring",
00:15:42.773 "raid_level": "raid1",
00:15:42.773 "superblock": true,
00:15:42.773 "num_base_bdevs": 4,
00:15:42.773 "num_base_bdevs_discovered": 3,
00:15:42.773 "num_base_bdevs_operational": 4,
00:15:42.773 "base_bdevs_list": [
00:15:42.773 {
00:15:42.773 "name": "BaseBdev1",
00:15:42.773 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0",
00:15:42.773 "is_configured": true,
00:15:42.773 "data_offset": 2048,
00:15:42.773 "data_size": 63488
00:15:42.773 },
00:15:42.773 {
00:15:42.773 "name": null,
00:15:42.773 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039",
00:15:42.773 "is_configured": false,
00:15:42.773 "data_offset": 0,
00:15:42.773 "data_size": 63488
00:15:42.773 },
00:15:42.773 {
00:15:42.773 "name": "BaseBdev3",
00:15:42.773 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f",
00:15:42.773 "is_configured": true,
00:15:42.773 "data_offset": 2048,
00:15:42.773 "data_size": 63488
00:15:42.773 },
00:15:42.773 {
00:15:42.773 "name": "BaseBdev4",
00:15:42.773 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6",
00:15:42.773 "is_configured": true,
00:15:42.773 "data_offset": 2048,
00:15:42.773 "data_size": 63488
00:15:42.773 }
00:15:42.773 ]
00:15:42.773 }'
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:42.773 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.342 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.342 11:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:43.342 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.342 11:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.342 [2024-11-20 11:27:51.033043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:43.342 11:27:51
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.342 "name": "Existed_Raid", 00:15:43.342 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:43.342 "strip_size_kb": 0, 00:15:43.342 "state": "configuring", 00:15:43.342 "raid_level": "raid1", 00:15:43.342 "superblock": true, 00:15:43.342 "num_base_bdevs": 4, 00:15:43.342 "num_base_bdevs_discovered": 2, 00:15:43.342 "num_base_bdevs_operational": 4, 00:15:43.342 "base_bdevs_list": [ 00:15:43.342 { 00:15:43.342 "name": "BaseBdev1", 00:15:43.342 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:43.342 "is_configured": true, 00:15:43.342 "data_offset": 2048, 00:15:43.342 "data_size": 63488 00:15:43.342 }, 00:15:43.342 { 00:15:43.342 "name": null, 00:15:43.342 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:43.342 "is_configured": false, 00:15:43.342 "data_offset": 0, 00:15:43.342 "data_size": 63488 00:15:43.342 }, 00:15:43.342 { 00:15:43.342 "name": null, 00:15:43.342 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:43.342 "is_configured": false, 00:15:43.342 "data_offset": 0, 00:15:43.342 "data_size": 63488 00:15:43.342 }, 00:15:43.342 { 00:15:43.342 "name": "BaseBdev4", 00:15:43.342 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:43.342 "is_configured": true, 00:15:43.342 "data_offset": 2048, 00:15:43.342 "data_size": 63488 00:15:43.342 } 00:15:43.342 ] 00:15:43.342 }' 00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.342 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:27:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 [2024-11-20 11:27:51.597165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.910 "name": "Existed_Raid", 00:15:43.910 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:43.910 "strip_size_kb": 0, 00:15:43.910 "state": "configuring", 00:15:43.910 "raid_level": "raid1", 00:15:43.910 "superblock": true, 00:15:43.910 "num_base_bdevs": 4, 00:15:43.910 "num_base_bdevs_discovered": 3, 00:15:43.910 "num_base_bdevs_operational": 4, 00:15:43.910 "base_bdevs_list": [ 00:15:43.910 { 00:15:43.910 "name": "BaseBdev1", 00:15:43.910 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:43.910 "is_configured": true, 00:15:43.910 "data_offset": 2048, 00:15:43.910 "data_size": 63488 00:15:43.910 }, 00:15:43.910 { 00:15:43.910 "name": null, 00:15:43.910 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:43.910 "is_configured": false, 00:15:43.910 "data_offset": 0, 00:15:43.910 "data_size": 63488 00:15:43.910 }, 00:15:43.910 { 00:15:43.910 "name": "BaseBdev3", 00:15:43.910 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:43.910 "is_configured": true, 00:15:43.910 "data_offset": 2048, 00:15:43.910 "data_size": 63488 00:15:43.910 }, 00:15:43.910 { 00:15:43.910 "name": "BaseBdev4", 00:15:43.910 "uuid": 
"cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:43.910 "is_configured": true, 00:15:43.910 "data_offset": 2048, 00:15:43.910 "data_size": 63488 00:15:43.910 } 00:15:43.910 ] 00:15:43.910 }' 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.910 11:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.478 [2024-11-20 11:27:52.157343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.478 "name": "Existed_Raid", 00:15:44.478 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:44.478 "strip_size_kb": 0, 00:15:44.478 "state": "configuring", 00:15:44.478 "raid_level": "raid1", 00:15:44.478 "superblock": true, 00:15:44.478 "num_base_bdevs": 4, 00:15:44.478 "num_base_bdevs_discovered": 2, 00:15:44.478 "num_base_bdevs_operational": 4, 00:15:44.478 "base_bdevs_list": [ 00:15:44.478 { 00:15:44.478 "name": null, 00:15:44.478 
"uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:44.478 "is_configured": false, 00:15:44.478 "data_offset": 0, 00:15:44.478 "data_size": 63488 00:15:44.478 }, 00:15:44.478 { 00:15:44.478 "name": null, 00:15:44.478 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:44.478 "is_configured": false, 00:15:44.478 "data_offset": 0, 00:15:44.478 "data_size": 63488 00:15:44.478 }, 00:15:44.478 { 00:15:44.478 "name": "BaseBdev3", 00:15:44.478 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:44.478 "is_configured": true, 00:15:44.478 "data_offset": 2048, 00:15:44.478 "data_size": 63488 00:15:44.478 }, 00:15:44.478 { 00:15:44.478 "name": "BaseBdev4", 00:15:44.478 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:44.478 "is_configured": true, 00:15:44.478 "data_offset": 2048, 00:15:44.478 "data_size": 63488 00:15:44.478 } 00:15:44.478 ] 00:15:44.478 }' 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.478 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.045 [2024-11-20 11:27:52.811794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.045 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.046 11:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.046 "name": "Existed_Raid", 00:15:45.046 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:45.046 "strip_size_kb": 0, 00:15:45.046 "state": "configuring", 00:15:45.046 "raid_level": "raid1", 00:15:45.046 "superblock": true, 00:15:45.046 "num_base_bdevs": 4, 00:15:45.046 "num_base_bdevs_discovered": 3, 00:15:45.046 "num_base_bdevs_operational": 4, 00:15:45.046 "base_bdevs_list": [ 00:15:45.046 { 00:15:45.046 "name": null, 00:15:45.046 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:45.046 "is_configured": false, 00:15:45.046 "data_offset": 0, 00:15:45.046 "data_size": 63488 00:15:45.046 }, 00:15:45.046 { 00:15:45.046 "name": "BaseBdev2", 00:15:45.046 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:45.046 "is_configured": true, 00:15:45.046 "data_offset": 2048, 00:15:45.046 "data_size": 63488 00:15:45.046 }, 00:15:45.046 { 00:15:45.046 "name": "BaseBdev3", 00:15:45.046 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:45.046 "is_configured": true, 00:15:45.046 "data_offset": 2048, 00:15:45.046 "data_size": 63488 00:15:45.046 }, 00:15:45.046 { 00:15:45.046 "name": "BaseBdev4", 00:15:45.046 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:45.046 "is_configured": true, 00:15:45.046 "data_offset": 2048, 00:15:45.046 "data_size": 63488 00:15:45.046 } 00:15:45.046 ] 00:15:45.046 }' 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.046 11:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.614 11:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 213a2f83-7052-475e-811d-bb4365c50ed0 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.614 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.614 [2024-11-20 11:27:53.457505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.873 NewBaseBdev 00:15:45.873 [2024-11-20 11:27:53.458041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.873 [2024-11-20 11:27:53.458081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.873 [2024-11-20 11:27:53.458414] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:45.874 [2024-11-20 11:27:53.458632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.874 [2024-11-20 11:27:53.458650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:45.874 [2024-11-20 11:27:53.458816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.874 11:27:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.874 [ 00:15:45.874 { 00:15:45.874 "name": "NewBaseBdev", 00:15:45.874 "aliases": [ 00:15:45.874 "213a2f83-7052-475e-811d-bb4365c50ed0" 00:15:45.874 ], 00:15:45.874 "product_name": "Malloc disk", 00:15:45.874 "block_size": 512, 00:15:45.874 "num_blocks": 65536, 00:15:45.874 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:45.874 "assigned_rate_limits": { 00:15:45.874 "rw_ios_per_sec": 0, 00:15:45.874 "rw_mbytes_per_sec": 0, 00:15:45.874 "r_mbytes_per_sec": 0, 00:15:45.874 "w_mbytes_per_sec": 0 00:15:45.874 }, 00:15:45.874 "claimed": true, 00:15:45.874 "claim_type": "exclusive_write", 00:15:45.874 "zoned": false, 00:15:45.874 "supported_io_types": { 00:15:45.874 "read": true, 00:15:45.874 "write": true, 00:15:45.874 "unmap": true, 00:15:45.874 "flush": true, 00:15:45.874 "reset": true, 00:15:45.874 "nvme_admin": false, 00:15:45.874 "nvme_io": false, 00:15:45.874 "nvme_io_md": false, 00:15:45.874 "write_zeroes": true, 00:15:45.874 "zcopy": true, 00:15:45.874 "get_zone_info": false, 00:15:45.874 "zone_management": false, 00:15:45.874 "zone_append": false, 00:15:45.874 "compare": false, 00:15:45.874 "compare_and_write": false, 00:15:45.874 "abort": true, 00:15:45.874 "seek_hole": false, 00:15:45.874 "seek_data": false, 00:15:45.874 "copy": true, 00:15:45.874 "nvme_iov_md": false 00:15:45.874 }, 00:15:45.874 "memory_domains": [ 00:15:45.874 { 00:15:45.874 "dma_device_id": "system", 00:15:45.874 "dma_device_type": 1 00:15:45.874 }, 00:15:45.874 { 00:15:45.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.874 "dma_device_type": 2 00:15:45.874 } 00:15:45.874 ], 00:15:45.874 "driver_specific": {} 00:15:45.874 } 00:15:45.874 ] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.874 11:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.874 "name": "Existed_Raid", 00:15:45.874 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:45.874 "strip_size_kb": 0, 00:15:45.874 
"state": "online", 00:15:45.874 "raid_level": "raid1", 00:15:45.874 "superblock": true, 00:15:45.874 "num_base_bdevs": 4, 00:15:45.874 "num_base_bdevs_discovered": 4, 00:15:45.874 "num_base_bdevs_operational": 4, 00:15:45.874 "base_bdevs_list": [ 00:15:45.874 { 00:15:45.874 "name": "NewBaseBdev", 00:15:45.874 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:45.874 "is_configured": true, 00:15:45.874 "data_offset": 2048, 00:15:45.874 "data_size": 63488 00:15:45.874 }, 00:15:45.874 { 00:15:45.874 "name": "BaseBdev2", 00:15:45.874 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:45.874 "is_configured": true, 00:15:45.874 "data_offset": 2048, 00:15:45.874 "data_size": 63488 00:15:45.874 }, 00:15:45.874 { 00:15:45.874 "name": "BaseBdev3", 00:15:45.874 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:45.874 "is_configured": true, 00:15:45.874 "data_offset": 2048, 00:15:45.874 "data_size": 63488 00:15:45.874 }, 00:15:45.874 { 00:15:45.874 "name": "BaseBdev4", 00:15:45.874 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:45.874 "is_configured": true, 00:15:45.874 "data_offset": 2048, 00:15:45.874 "data_size": 63488 00:15:45.874 } 00:15:45.874 ] 00:15:45.874 }' 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.874 11:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.443 
11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.443 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.443 11:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.443 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.443 [2024-11-20 11:27:54.006158] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.443 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.443 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.443 "name": "Existed_Raid", 00:15:46.443 "aliases": [ 00:15:46.443 "f99172ac-9af3-4fdc-8040-3ea46c11e7aa" 00:15:46.443 ], 00:15:46.443 "product_name": "Raid Volume", 00:15:46.443 "block_size": 512, 00:15:46.443 "num_blocks": 63488, 00:15:46.443 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:46.443 "assigned_rate_limits": { 00:15:46.443 "rw_ios_per_sec": 0, 00:15:46.443 "rw_mbytes_per_sec": 0, 00:15:46.443 "r_mbytes_per_sec": 0, 00:15:46.443 "w_mbytes_per_sec": 0 00:15:46.443 }, 00:15:46.443 "claimed": false, 00:15:46.443 "zoned": false, 00:15:46.443 "supported_io_types": { 00:15:46.443 "read": true, 00:15:46.443 "write": true, 00:15:46.443 "unmap": false, 00:15:46.443 "flush": false, 00:15:46.443 "reset": true, 00:15:46.443 "nvme_admin": false, 00:15:46.443 "nvme_io": false, 00:15:46.443 "nvme_io_md": false, 00:15:46.443 "write_zeroes": true, 00:15:46.443 "zcopy": false, 00:15:46.443 "get_zone_info": false, 00:15:46.444 "zone_management": false, 00:15:46.444 "zone_append": false, 00:15:46.444 "compare": false, 00:15:46.444 "compare_and_write": false, 00:15:46.444 
"abort": false, 00:15:46.444 "seek_hole": false, 00:15:46.444 "seek_data": false, 00:15:46.444 "copy": false, 00:15:46.444 "nvme_iov_md": false 00:15:46.444 }, 00:15:46.444 "memory_domains": [ 00:15:46.444 { 00:15:46.444 "dma_device_id": "system", 00:15:46.444 "dma_device_type": 1 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.444 "dma_device_type": 2 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "system", 00:15:46.444 "dma_device_type": 1 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.444 "dma_device_type": 2 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "system", 00:15:46.444 "dma_device_type": 1 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.444 "dma_device_type": 2 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "system", 00:15:46.444 "dma_device_type": 1 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.444 "dma_device_type": 2 00:15:46.444 } 00:15:46.444 ], 00:15:46.444 "driver_specific": { 00:15:46.444 "raid": { 00:15:46.444 "uuid": "f99172ac-9af3-4fdc-8040-3ea46c11e7aa", 00:15:46.444 "strip_size_kb": 0, 00:15:46.444 "state": "online", 00:15:46.444 "raid_level": "raid1", 00:15:46.444 "superblock": true, 00:15:46.444 "num_base_bdevs": 4, 00:15:46.444 "num_base_bdevs_discovered": 4, 00:15:46.444 "num_base_bdevs_operational": 4, 00:15:46.444 "base_bdevs_list": [ 00:15:46.444 { 00:15:46.444 "name": "NewBaseBdev", 00:15:46.444 "uuid": "213a2f83-7052-475e-811d-bb4365c50ed0", 00:15:46.444 "is_configured": true, 00:15:46.444 "data_offset": 2048, 00:15:46.444 "data_size": 63488 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "name": "BaseBdev2", 00:15:46.444 "uuid": "c4e1bea5-4717-45e9-8fdd-1cd5fb23e039", 00:15:46.444 "is_configured": true, 00:15:46.444 "data_offset": 2048, 00:15:46.444 "data_size": 63488 00:15:46.444 }, 00:15:46.444 { 
00:15:46.444 "name": "BaseBdev3", 00:15:46.444 "uuid": "66491000-eee7-48bb-92e3-00f84d19651f", 00:15:46.444 "is_configured": true, 00:15:46.444 "data_offset": 2048, 00:15:46.444 "data_size": 63488 00:15:46.444 }, 00:15:46.444 { 00:15:46.444 "name": "BaseBdev4", 00:15:46.444 "uuid": "cbfb6ff9-0960-4a56-8fc9-269efb9168f6", 00:15:46.444 "is_configured": true, 00:15:46.444 "data_offset": 2048, 00:15:46.444 "data_size": 63488 00:15:46.444 } 00:15:46.444 ] 00:15:46.444 } 00:15:46.444 } 00:15:46.444 }' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:46.444 BaseBdev2 00:15:46.444 BaseBdev3 00:15:46.444 BaseBdev4' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.444 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.703 [2024-11-20 11:27:54.357812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.703 [2024-11-20 11:27:54.357975] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.703 [2024-11-20 11:27:54.358187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.703 [2024-11-20 11:27:54.358575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.703 [2024-11-20 11:27:54.358599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73923 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:15:46.703 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73923 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:15:46.704 killing process with pid 73923 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73923 00:15:46.704 [2024-11-20 11:27:54.395189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.704 11:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73923 00:15:47.277 [2024-11-20 11:27:54.832677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.214 ************************************ 00:15:48.214 END TEST raid_state_function_test_sb 00:15:48.214 ************************************ 00:15:48.214 11:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.214 00:15:48.214 real 0m12.720s 
00:15:48.214 user 0m20.972s 00:15:48.214 sys 0m1.787s 00:15:48.214 11:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.214 11:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.214 11:27:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:48.214 11:27:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:48.214 11:27:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.214 11:27:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.214 ************************************ 00:15:48.214 START TEST raid_superblock_test 00:15:48.214 ************************************ 00:15:48.214 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:48.214 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:48.214 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:48.214 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:48.215 11:27:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:48.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74605 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74605 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74605 ']' 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.215 11:27:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.215 [2024-11-20 11:27:56.054842] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:48.215 [2024-11-20 11:27:56.055020] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74605 ] 00:15:48.474 [2024-11-20 11:27:56.245512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.733 [2024-11-20 11:27:56.403433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.992 [2024-11-20 11:27:56.612447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.992 [2024-11-20 11:27:56.612517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:49.251 
11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.251 malloc1 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.251 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.251 [2024-11-20 11:27:57.090711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.251 [2024-11-20 11:27:57.090926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.251 [2024-11-20 11:27:57.091003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.251 [2024-11-20 11:27:57.091127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.251 [2024-11-20 11:27:57.093909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.251 [2024-11-20 11:27:57.094087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.251 pt1 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.510 malloc2 00:15:49.510 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 [2024-11-20 11:27:57.146157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.511 [2024-11-20 11:27:57.146342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.511 [2024-11-20 11:27:57.146417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:49.511 [2024-11-20 11:27:57.146538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.511 [2024-11-20 11:27:57.149295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.511 [2024-11-20 11:27:57.149444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.511 
pt2 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 malloc3 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 [2024-11-20 11:27:57.220800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.511 [2024-11-20 11:27:57.220977] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.511 [2024-11-20 11:27:57.221054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.511 [2024-11-20 11:27:57.221159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.511 [2024-11-20 11:27:57.224004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.511 [2024-11-20 11:27:57.224048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.511 pt3 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 malloc4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 [2024-11-20 11:27:57.273136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:49.511 [2024-11-20 11:27:57.273358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.511 [2024-11-20 11:27:57.273422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:49.511 [2024-11-20 11:27:57.273448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.511 [2024-11-20 11:27:57.276300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.511 [2024-11-20 11:27:57.276347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:49.511 pt4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 [2024-11-20 11:27:57.281239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.511 [2024-11-20 11:27:57.283761] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.511 [2024-11-20 11:27:57.283970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.511 [2024-11-20 11:27:57.284083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:49.511 [2024-11-20 11:27:57.284448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:49.511 [2024-11-20 11:27:57.284568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.511 [2024-11-20 11:27:57.284992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.511 [2024-11-20 11:27:57.285346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:49.511 [2024-11-20 11:27:57.285376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:49.511 [2024-11-20 11:27:57.285605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.511 
11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.511 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.511 "name": "raid_bdev1", 00:15:49.511 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:49.511 "strip_size_kb": 0, 00:15:49.511 "state": "online", 00:15:49.511 "raid_level": "raid1", 00:15:49.511 "superblock": true, 00:15:49.511 "num_base_bdevs": 4, 00:15:49.511 "num_base_bdevs_discovered": 4, 00:15:49.511 "num_base_bdevs_operational": 4, 00:15:49.511 "base_bdevs_list": [ 00:15:49.511 { 00:15:49.511 "name": "pt1", 00:15:49.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.511 "is_configured": true, 00:15:49.511 "data_offset": 2048, 00:15:49.511 "data_size": 63488 00:15:49.511 }, 00:15:49.511 { 00:15:49.511 "name": "pt2", 00:15:49.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.512 "is_configured": true, 00:15:49.512 "data_offset": 2048, 00:15:49.512 "data_size": 63488 00:15:49.512 }, 00:15:49.512 { 00:15:49.512 "name": "pt3", 00:15:49.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.512 "is_configured": true, 00:15:49.512 "data_offset": 2048, 00:15:49.512 "data_size": 63488 
00:15:49.512 }, 00:15:49.512 { 00:15:49.512 "name": "pt4", 00:15:49.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.512 "is_configured": true, 00:15:49.512 "data_offset": 2048, 00:15:49.512 "data_size": 63488 00:15:49.512 } 00:15:49.512 ] 00:15:49.512 }' 00:15:49.512 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.512 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.078 [2024-11-20 11:27:57.798145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.078 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.078 "name": "raid_bdev1", 00:15:50.078 "aliases": [ 00:15:50.078 "aa57b351-c8c8-4b4a-87fd-db24508d14cb" 00:15:50.078 ], 
00:15:50.078 "product_name": "Raid Volume", 00:15:50.078 "block_size": 512, 00:15:50.078 "num_blocks": 63488, 00:15:50.078 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:50.078 "assigned_rate_limits": { 00:15:50.078 "rw_ios_per_sec": 0, 00:15:50.078 "rw_mbytes_per_sec": 0, 00:15:50.078 "r_mbytes_per_sec": 0, 00:15:50.078 "w_mbytes_per_sec": 0 00:15:50.078 }, 00:15:50.078 "claimed": false, 00:15:50.078 "zoned": false, 00:15:50.078 "supported_io_types": { 00:15:50.078 "read": true, 00:15:50.078 "write": true, 00:15:50.078 "unmap": false, 00:15:50.078 "flush": false, 00:15:50.078 "reset": true, 00:15:50.078 "nvme_admin": false, 00:15:50.078 "nvme_io": false, 00:15:50.078 "nvme_io_md": false, 00:15:50.078 "write_zeroes": true, 00:15:50.078 "zcopy": false, 00:15:50.078 "get_zone_info": false, 00:15:50.078 "zone_management": false, 00:15:50.078 "zone_append": false, 00:15:50.078 "compare": false, 00:15:50.078 "compare_and_write": false, 00:15:50.078 "abort": false, 00:15:50.078 "seek_hole": false, 00:15:50.078 "seek_data": false, 00:15:50.078 "copy": false, 00:15:50.078 "nvme_iov_md": false 00:15:50.078 }, 00:15:50.078 "memory_domains": [ 00:15:50.078 { 00:15:50.078 "dma_device_id": "system", 00:15:50.078 "dma_device_type": 1 00:15:50.078 }, 00:15:50.078 { 00:15:50.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.078 "dma_device_type": 2 00:15:50.078 }, 00:15:50.078 { 00:15:50.078 "dma_device_id": "system", 00:15:50.078 "dma_device_type": 1 00:15:50.078 }, 00:15:50.078 { 00:15:50.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.078 "dma_device_type": 2 00:15:50.078 }, 00:15:50.078 { 00:15:50.078 "dma_device_id": "system", 00:15:50.078 "dma_device_type": 1 00:15:50.078 }, 00:15:50.078 { 00:15:50.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.078 "dma_device_type": 2 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "dma_device_id": "system", 00:15:50.079 "dma_device_type": 1 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:50.079 "dma_device_type": 2 00:15:50.079 } 00:15:50.079 ], 00:15:50.079 "driver_specific": { 00:15:50.079 "raid": { 00:15:50.079 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:50.079 "strip_size_kb": 0, 00:15:50.079 "state": "online", 00:15:50.079 "raid_level": "raid1", 00:15:50.079 "superblock": true, 00:15:50.079 "num_base_bdevs": 4, 00:15:50.079 "num_base_bdevs_discovered": 4, 00:15:50.079 "num_base_bdevs_operational": 4, 00:15:50.079 "base_bdevs_list": [ 00:15:50.079 { 00:15:50.079 "name": "pt1", 00:15:50.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.079 "is_configured": true, 00:15:50.079 "data_offset": 2048, 00:15:50.079 "data_size": 63488 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "name": "pt2", 00:15:50.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.079 "is_configured": true, 00:15:50.079 "data_offset": 2048, 00:15:50.079 "data_size": 63488 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "name": "pt3", 00:15:50.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.079 "is_configured": true, 00:15:50.079 "data_offset": 2048, 00:15:50.079 "data_size": 63488 00:15:50.079 }, 00:15:50.079 { 00:15:50.079 "name": "pt4", 00:15:50.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.079 "is_configured": true, 00:15:50.079 "data_offset": 2048, 00:15:50.079 "data_size": 63488 00:15:50.079 } 00:15:50.079 ] 00:15:50.079 } 00:15:50.079 } 00:15:50.079 }' 00:15:50.079 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.079 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.079 pt2 00:15:50.079 pt3 00:15:50.079 pt4' 00:15:50.079 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.337 11:27:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.338 11:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.338 11:27:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:50.338 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.338 [2024-11-20 11:27:58.162171] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aa57b351-c8c8-4b4a-87fd-db24508d14cb 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aa57b351-c8c8-4b4a-87fd-db24508d14cb ']' 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.597 [2024-11-20 11:27:58.205809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.597 [2024-11-20 11:27:58.205959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.597 [2024-11-20 11:27:58.206159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.597 [2024-11-20 11:27:58.206374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.597 [2024-11-20 11:27:58.206525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.597 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.598 11:27:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 [2024-11-20 11:27:58.357950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:50.598 [2024-11-20 11:27:58.361440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:50.598 [2024-11-20 11:27:58.361554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:50.598 [2024-11-20 11:27:58.361677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:50.598 [2024-11-20 11:27:58.361788] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:50.598 [2024-11-20 11:27:58.361890] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:50.598 [2024-11-20 11:27:58.361957] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:50.598 [2024-11-20 11:27:58.362013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:50.598 [2024-11-20 11:27:58.362047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.598 [2024-11-20 11:27:58.362074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:15:50.598 request: 00:15:50.598 { 00:15:50.598 "name": "raid_bdev1", 00:15:50.598 "raid_level": "raid1", 00:15:50.598 "base_bdevs": [ 00:15:50.598 "malloc1", 00:15:50.598 "malloc2", 00:15:50.598 "malloc3", 00:15:50.598 "malloc4" 00:15:50.598 ], 00:15:50.598 "superblock": false, 00:15:50.598 "method": "bdev_raid_create", 00:15:50.598 "req_id": 1 00:15:50.598 } 00:15:50.598 Got JSON-RPC error response 00:15:50.598 response: 00:15:50.598 { 00:15:50.598 "code": -17, 00:15:50.598 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:50.598 } 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.598 
11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.598 [2024-11-20 11:27:58.421920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.598 [2024-11-20 11:27:58.422125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.598 [2024-11-20 11:27:58.422195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:50.598 [2024-11-20 11:27:58.422409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.598 [2024-11-20 11:27:58.425472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.598 [2024-11-20 11:27:58.425681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.598 [2024-11-20 11:27:58.425887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:50.598 [2024-11-20 11:27:58.426075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.598 pt1 00:15:50.598 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.599 11:27:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.599 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.858 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.858 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.858 "name": "raid_bdev1", 00:15:50.858 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:50.858 "strip_size_kb": 0, 00:15:50.858 "state": "configuring", 00:15:50.858 "raid_level": "raid1", 00:15:50.858 "superblock": true, 00:15:50.858 "num_base_bdevs": 4, 00:15:50.858 "num_base_bdevs_discovered": 1, 00:15:50.858 "num_base_bdevs_operational": 4, 00:15:50.858 "base_bdevs_list": [ 00:15:50.858 { 00:15:50.858 "name": "pt1", 00:15:50.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.858 "is_configured": true, 00:15:50.858 "data_offset": 2048, 00:15:50.858 "data_size": 63488 00:15:50.858 }, 00:15:50.858 { 00:15:50.858 "name": null, 00:15:50.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.858 "is_configured": false, 00:15:50.858 "data_offset": 2048, 00:15:50.858 "data_size": 63488 00:15:50.858 }, 00:15:50.858 { 00:15:50.858 "name": null, 00:15:50.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.858 
"is_configured": false, 00:15:50.858 "data_offset": 2048, 00:15:50.858 "data_size": 63488 00:15:50.858 }, 00:15:50.858 { 00:15:50.858 "name": null, 00:15:50.858 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.858 "is_configured": false, 00:15:50.858 "data_offset": 2048, 00:15:50.858 "data_size": 63488 00:15:50.858 } 00:15:50.858 ] 00:15:50.858 }' 00:15:50.858 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.858 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 [2024-11-20 11:27:58.946200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.117 [2024-11-20 11:27:58.946408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.117 [2024-11-20 11:27:58.946480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:51.117 [2024-11-20 11:27:58.946599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.117 [2024-11-20 11:27:58.947197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.117 [2024-11-20 11:27:58.947227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.117 [2024-11-20 11:27:58.947325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.117 [2024-11-20 11:27:58.947369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:51.117 pt2 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.117 [2024-11-20 11:27:58.954175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.117 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.457 11:27:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.457 11:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.457 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.457 "name": "raid_bdev1", 00:15:51.457 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:51.457 "strip_size_kb": 0, 00:15:51.457 "state": "configuring", 00:15:51.457 "raid_level": "raid1", 00:15:51.457 "superblock": true, 00:15:51.457 "num_base_bdevs": 4, 00:15:51.457 "num_base_bdevs_discovered": 1, 00:15:51.457 "num_base_bdevs_operational": 4, 00:15:51.457 "base_bdevs_list": [ 00:15:51.457 { 00:15:51.457 "name": "pt1", 00:15:51.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.457 "is_configured": true, 00:15:51.457 "data_offset": 2048, 00:15:51.457 "data_size": 63488 00:15:51.457 }, 00:15:51.457 { 00:15:51.457 "name": null, 00:15:51.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.457 "is_configured": false, 00:15:51.457 "data_offset": 0, 00:15:51.457 "data_size": 63488 00:15:51.457 }, 00:15:51.457 { 00:15:51.457 "name": null, 00:15:51.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.457 "is_configured": false, 00:15:51.457 "data_offset": 2048, 00:15:51.457 "data_size": 63488 00:15:51.457 }, 00:15:51.457 { 00:15:51.457 "name": null, 00:15:51.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.457 "is_configured": false, 00:15:51.457 "data_offset": 2048, 00:15:51.457 "data_size": 63488 00:15:51.458 } 00:15:51.458 ] 00:15:51.458 }' 00:15:51.458 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.458 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.768 [2024-11-20 11:27:59.482328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.768 [2024-11-20 11:27:59.482538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.768 [2024-11-20 11:27:59.482635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:51.768 [2024-11-20 11:27:59.482843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.768 [2024-11-20 11:27:59.483439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.768 [2024-11-20 11:27:59.483587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.768 [2024-11-20 11:27:59.483725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.768 [2024-11-20 11:27:59.483760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.768 pt2 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.768 11:27:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.768 [2024-11-20 11:27:59.490291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.768 [2024-11-20 11:27:59.490475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.768 [2024-11-20 11:27:59.490544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.768 [2024-11-20 11:27:59.490671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.768 [2024-11-20 11:27:59.491255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.768 [2024-11-20 11:27:59.491396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.768 [2024-11-20 11:27:59.491584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.768 [2024-11-20 11:27:59.491726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.768 pt3 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.768 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.768 [2024-11-20 11:27:59.498277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:51.768 [2024-11-20 
11:27:59.498447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.768 [2024-11-20 11:27:59.498571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:51.768 [2024-11-20 11:27:59.498595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.768 [2024-11-20 11:27:59.499059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.769 [2024-11-20 11:27:59.499094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:51.769 [2024-11-20 11:27:59.499207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:51.769 [2024-11-20 11:27:59.499236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:51.769 [2024-11-20 11:27:59.499418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.769 [2024-11-20 11:27:59.499434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.769 [2024-11-20 11:27:59.499763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:51.769 [2024-11-20 11:27:59.500043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.769 [2024-11-20 11:27:59.500087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:51.769 [2024-11-20 11:27:59.500256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.769 pt4 00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.769 "name": "raid_bdev1",
00:15:51.769 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:51.769 "strip_size_kb": 0,
00:15:51.769 "state": "online",
00:15:51.769 "raid_level": "raid1",
00:15:51.769 "superblock": true,
00:15:51.769 "num_base_bdevs": 4,
00:15:51.769 "num_base_bdevs_discovered": 4,
00:15:51.769 "num_base_bdevs_operational": 4,
00:15:51.769 "base_bdevs_list": [
00:15:51.769 {
00:15:51.769 "name": "pt1",
00:15:51.769 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:51.769 "is_configured": true,
00:15:51.769 "data_offset": 2048,
00:15:51.769 "data_size": 63488
00:15:51.769 },
00:15:51.769 {
00:15:51.769 "name": "pt2",
00:15:51.769 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:51.769 "is_configured": true,
00:15:51.769 "data_offset": 2048,
00:15:51.769 "data_size": 63488
00:15:51.769 },
00:15:51.769 {
00:15:51.769 "name": "pt3",
00:15:51.769 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:51.769 "is_configured": true,
00:15:51.769 "data_offset": 2048,
00:15:51.769 "data_size": 63488
00:15:51.769 },
00:15:51.769 {
00:15:51.769 "name": "pt4",
00:15:51.769 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:51.769 "is_configured": true,
00:15:51.769 "data_offset": 2048,
00:15:51.769 "data_size": 63488
00:15:51.769 }
00:15:51.769 ]
00:15:51.769 }'
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.769 11:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:52.335 [2024-11-20 11:28:00.018895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.335 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:52.335 "name": "raid_bdev1",
00:15:52.335 "aliases": [
00:15:52.335 "aa57b351-c8c8-4b4a-87fd-db24508d14cb"
00:15:52.335 ],
00:15:52.335 "product_name": "Raid Volume",
00:15:52.335 "block_size": 512,
00:15:52.335 "num_blocks": 63488,
00:15:52.335 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:52.335 "assigned_rate_limits": {
00:15:52.335 "rw_ios_per_sec": 0,
00:15:52.335 "rw_mbytes_per_sec": 0,
00:15:52.335 "r_mbytes_per_sec": 0,
00:15:52.335 "w_mbytes_per_sec": 0
00:15:52.335 },
00:15:52.335 "claimed": false,
00:15:52.335 "zoned": false,
00:15:52.335 "supported_io_types": {
00:15:52.335 "read": true,
00:15:52.335 "write": true,
00:15:52.335 "unmap": false,
00:15:52.335 "flush": false,
00:15:52.335 "reset": true,
00:15:52.335 "nvme_admin": false,
00:15:52.335 "nvme_io": false,
00:15:52.335 "nvme_io_md": false,
00:15:52.335 "write_zeroes": true,
00:15:52.335 "zcopy": false,
00:15:52.335 "get_zone_info": false,
00:15:52.335 "zone_management": false,
00:15:52.335 "zone_append": false,
00:15:52.335 "compare": false,
00:15:52.335 "compare_and_write": false,
00:15:52.335 "abort": false,
00:15:52.335 "seek_hole": false,
00:15:52.335 "seek_data": false,
00:15:52.335 "copy": false,
00:15:52.335 "nvme_iov_md": false
00:15:52.336 },
00:15:52.336 "memory_domains": [
00:15:52.336 {
00:15:52.336 "dma_device_id": "system",
00:15:52.336 "dma_device_type": 1
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.336 "dma_device_type": 2
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "system",
00:15:52.336 "dma_device_type": 1
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.336 "dma_device_type": 2
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "system",
00:15:52.336 "dma_device_type": 1
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.336 "dma_device_type": 2
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "system",
00:15:52.336 "dma_device_type": 1
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.336 "dma_device_type": 2
00:15:52.336 }
00:15:52.336 ],
00:15:52.336 "driver_specific": {
00:15:52.336 "raid": {
00:15:52.336 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:52.336 "strip_size_kb": 0,
00:15:52.336 "state": "online",
00:15:52.336 "raid_level": "raid1",
00:15:52.336 "superblock": true,
00:15:52.336 "num_base_bdevs": 4,
00:15:52.336 "num_base_bdevs_discovered": 4,
00:15:52.336 "num_base_bdevs_operational": 4,
00:15:52.336 "base_bdevs_list": [
00:15:52.336 {
00:15:52.336 "name": "pt1",
00:15:52.336 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:52.336 "is_configured": true,
00:15:52.336 "data_offset": 2048,
00:15:52.336 "data_size": 63488
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "name": "pt2",
00:15:52.336 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.336 "is_configured": true,
00:15:52.336 "data_offset": 2048,
00:15:52.336 "data_size": 63488
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "name": "pt3",
00:15:52.336 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:52.336 "is_configured": true,
00:15:52.336 "data_offset": 2048,
00:15:52.336 "data_size": 63488
00:15:52.336 },
00:15:52.336 {
00:15:52.336 "name": "pt4",
00:15:52.336 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:52.336 "is_configured": true,
00:15:52.336 "data_offset": 2048,
00:15:52.336 "data_size": 63488
00:15:52.336 }
00:15:52.336 ]
00:15:52.336 }
00:15:52.336 }
00:15:52.336 }'
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:52.336 pt2
00:15:52.336 pt3
00:15:52.336 pt4'
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.336 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.595 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.595 [2024-11-20 11:28:00.422998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.853 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aa57b351-c8c8-4b4a-87fd-db24508d14cb '!=' aa57b351-c8c8-4b4a-87fd-db24508d14cb ']'
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.854 [2024-11-20 11:28:00.482758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.854 "name": "raid_bdev1",
00:15:52.854 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:52.854 "strip_size_kb": 0,
00:15:52.854 "state": "online",
00:15:52.854 "raid_level": "raid1",
00:15:52.854 "superblock": true,
00:15:52.854 "num_base_bdevs": 4,
00:15:52.854 "num_base_bdevs_discovered": 3,
00:15:52.854 "num_base_bdevs_operational": 3,
00:15:52.854 "base_bdevs_list": [
00:15:52.854 {
00:15:52.854 "name": null,
00:15:52.854 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.854 "is_configured": false,
00:15:52.854 "data_offset": 0,
00:15:52.854 "data_size": 63488
00:15:52.854 },
00:15:52.854 {
00:15:52.854 "name": "pt2",
00:15:52.854 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.854 "is_configured": true,
00:15:52.854 "data_offset": 2048,
00:15:52.854 "data_size": 63488
00:15:52.854 },
00:15:52.854 {
00:15:52.854 "name": "pt3",
00:15:52.854 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:52.854 "is_configured": true,
00:15:52.854 "data_offset": 2048,
00:15:52.854 "data_size": 63488
00:15:52.854 },
00:15:52.854 {
00:15:52.854 "name": "pt4",
00:15:52.854 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:52.854 "is_configured": true,
00:15:52.854 "data_offset": 2048,
00:15:52.854 "data_size": 63488
00:15:52.854 }
00:15:52.854 ]
00:15:52.854 }'
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.854 11:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.421 [2024-11-20 11:28:01.030752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:53.421 [2024-11-20 11:28:01.030971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:53.421 [2024-11-20 11:28:01.031088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:53.421 [2024-11-20 11:28:01.031195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:53.421 [2024-11-20 11:28:01.031212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.421 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.422 [2024-11-20 11:28:01.118760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:53.422 [2024-11-20 11:28:01.118830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:53.422 [2024-11-20 11:28:01.118859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:15:53.422 [2024-11-20 11:28:01.118875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:53.422 [2024-11-20 11:28:01.121775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:53.422 [2024-11-20 11:28:01.121955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:53.422 [2024-11-20 11:28:01.122101] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:53.422 [2024-11-20 11:28:01.122166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:53.422 pt2
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.422 "name": "raid_bdev1",
00:15:53.422 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:53.422 "strip_size_kb": 0,
00:15:53.422 "state": "configuring",
00:15:53.422 "raid_level": "raid1",
00:15:53.422 "superblock": true,
00:15:53.422 "num_base_bdevs": 4,
00:15:53.422 "num_base_bdevs_discovered": 1,
00:15:53.422 "num_base_bdevs_operational": 3,
00:15:53.422 "base_bdevs_list": [
00:15:53.422 {
00:15:53.422 "name": null,
00:15:53.422 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.422 "is_configured": false,
00:15:53.422 "data_offset": 2048,
00:15:53.422 "data_size": 63488
00:15:53.422 },
00:15:53.422 {
00:15:53.422 "name": "pt2",
00:15:53.422 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.422 "is_configured": true,
00:15:53.422 "data_offset": 2048,
00:15:53.422 "data_size": 63488
00:15:53.422 },
00:15:53.422 {
00:15:53.422 "name": null,
00:15:53.422 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:53.422 "is_configured": false,
00:15:53.422 "data_offset": 2048,
00:15:53.422 "data_size": 63488
00:15:53.422 },
00:15:53.422 {
00:15:53.422 "name": null,
00:15:53.422 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:53.422 "is_configured": false,
00:15:53.422 "data_offset": 2048,
00:15:53.422 "data_size": 63488
00:15:53.422 }
00:15:53.422 ]
00:15:53.422 }'
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.422 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.989 [2024-11-20 11:28:01.650941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:53.989 [2024-11-20 11:28:01.651209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:53.989 [2024-11-20 11:28:01.651286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:15:53.989 [2024-11-20 11:28:01.651575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:53.989 [2024-11-20 11:28:01.652166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:53.989 [2024-11-20 11:28:01.652199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:53.989 [2024-11-20 11:28:01.652308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:53.989 [2024-11-20 11:28:01.652339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:53.989 pt3
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.989 "name": "raid_bdev1",
00:15:53.989 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:53.989 "strip_size_kb": 0,
00:15:53.989 "state": "configuring",
00:15:53.989 "raid_level": "raid1",
00:15:53.989 "superblock": true,
00:15:53.989 "num_base_bdevs": 4,
00:15:53.989 "num_base_bdevs_discovered": 2,
00:15:53.989 "num_base_bdevs_operational": 3,
00:15:53.989 "base_bdevs_list": [
00:15:53.989 {
00:15:53.989 "name": null,
00:15:53.989 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.989 "is_configured": false,
00:15:53.989 "data_offset": 2048,
00:15:53.989 "data_size": 63488
00:15:53.989 },
00:15:53.989 {
00:15:53.989 "name": "pt2",
00:15:53.989 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.989 "is_configured": true,
00:15:53.989 "data_offset": 2048,
00:15:53.989 "data_size": 63488
00:15:53.989 },
00:15:53.989 {
00:15:53.989 "name": "pt3",
00:15:53.989 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:53.989 "is_configured": true,
00:15:53.989 "data_offset": 2048,
00:15:53.989 "data_size": 63488
00:15:53.989 },
00:15:53.989 {
00:15:53.989 "name": null,
00:15:53.989 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:53.989 "is_configured": false,
00:15:53.989 "data_offset": 2048,
00:15:53.989 "data_size": 63488
00:15:53.989 }
00:15:53.989 ]
00:15:53.989 }'
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.989 11:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.555 [2024-11-20 11:28:02.135110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:15:54.555 [2024-11-20 11:28:02.135327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:54.555 [2024-11-20 11:28:02.135405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:15:54.555 [2024-11-20 11:28:02.135587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:54.555 [2024-11-20 11:28:02.136232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:54.555 [2024-11-20 11:28:02.136386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:15:54.555 [2024-11-20 11:28:02.136508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:15:54.555 [2024-11-20 11:28:02.136550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:15:54.555 [2024-11-20 11:28:02.136757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:15:54.555 [2024-11-20 11:28:02.136774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:15:54.555 [2024-11-20 11:28:02.137087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:15:54.555 [2024-11-20 11:28:02.137284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:15:54.555 [2024-11-20 11:28:02.137304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:15:54.555 [2024-11-20 11:28:02.137469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:54.555 pt4
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:54.555 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:54.556 "name": "raid_bdev1",
00:15:54.556 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb",
00:15:54.556 "strip_size_kb": 0,
00:15:54.556 "state": "online",
00:15:54.556 "raid_level": "raid1",
00:15:54.556 "superblock": true,
00:15:54.556 "num_base_bdevs": 4,
00:15:54.556 "num_base_bdevs_discovered": 3,
00:15:54.556 "num_base_bdevs_operational": 3,
00:15:54.556 "base_bdevs_list": [
00:15:54.556 {
00:15:54.556 "name": null,
00:15:54.556 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:54.556 "is_configured": false,
00:15:54.556 "data_offset": 2048,
00:15:54.556 "data_size": 63488
00:15:54.556 },
00:15:54.556 {
00:15:54.556 "name": "pt2",
00:15:54.556 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:54.556 "is_configured": true,
00:15:54.556 "data_offset": 2048,
00:15:54.556 "data_size": 63488
00:15:54.556 },
00:15:54.556 {
00:15:54.556 "name": "pt3",
00:15:54.556 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:54.556 "is_configured": true,
00:15:54.556 "data_offset": 2048,
00:15:54.556 "data_size": 63488
00:15:54.556 },
00:15:54.556 {
00:15:54.556 "name": "pt4",
00:15:54.556 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:54.556 "is_configured": true,
00:15:54.556 "data_offset": 2048,
00:15:54.556 "data_size": 63488
00:15:54.556 }
00:15:54.556 ]
00:15:54.556 }'
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:54.556 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.814 [2024-11-20 11:28:02.623164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:54.814 [2024-11-20 11:28:02.623328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:54.814 [2024-11-20 11:28:02.623555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:54.814 [2024-11-20 11:28:02.623781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:54.814 [2024-11-20 11:28:02.623944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.814 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:55.073 [2024-11-20 11:28:02.719198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:55.073 [2024-11-20 11:28:02.719291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:55.073 [2024-11-20 11:28:02.719318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:15:55.073 [2024-11-20 11:28:02.719339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:55.073 [2024-11-20 11:28:02.722290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:55.073 [2024-11-20 11:28:02.722457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:55.073 [2024-11-20 11:28:02.722588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:55.073 [2024-11-20 11:28:02.722672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:55.073 [2024-11-20 11:28:02.722850] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:15:55.073 [2024-11-20 11:28:02.722874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:55.073 [2024-11-20 11:28:02.722896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:15:55.073 [2024-11-20 11:28:02.722981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:55.073 pt1
00:15:55.073 [2024-11-20 11:28:02.723144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:55.073 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local
expected_state=configuring 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.074 "name": "raid_bdev1", 00:15:55.074 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:55.074 "strip_size_kb": 0, 00:15:55.074 "state": "configuring", 00:15:55.074 "raid_level": "raid1", 00:15:55.074 "superblock": true, 00:15:55.074 "num_base_bdevs": 4, 00:15:55.074 "num_base_bdevs_discovered": 2, 00:15:55.074 "num_base_bdevs_operational": 3, 00:15:55.074 "base_bdevs_list": [ 00:15:55.074 { 00:15:55.074 "name": null, 00:15:55.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.074 "is_configured": false, 00:15:55.074 "data_offset": 2048, 00:15:55.074 
"data_size": 63488 00:15:55.074 }, 00:15:55.074 { 00:15:55.074 "name": "pt2", 00:15:55.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.074 "is_configured": true, 00:15:55.074 "data_offset": 2048, 00:15:55.074 "data_size": 63488 00:15:55.074 }, 00:15:55.074 { 00:15:55.074 "name": "pt3", 00:15:55.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.074 "is_configured": true, 00:15:55.074 "data_offset": 2048, 00:15:55.074 "data_size": 63488 00:15:55.074 }, 00:15:55.074 { 00:15:55.074 "name": null, 00:15:55.074 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:55.074 "is_configured": false, 00:15:55.074 "data_offset": 2048, 00:15:55.074 "data_size": 63488 00:15:55.074 } 00:15:55.074 ] 00:15:55.074 }' 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.074 11:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.641 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:55.641 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.641 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.642 [2024-11-20 
11:28:03.315383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:55.642 [2024-11-20 11:28:03.315582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.642 [2024-11-20 11:28:03.315640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:55.642 [2024-11-20 11:28:03.315659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.642 [2024-11-20 11:28:03.316196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.642 [2024-11-20 11:28:03.316221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:55.642 [2024-11-20 11:28:03.316338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:55.642 [2024-11-20 11:28:03.316378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:55.642 [2024-11-20 11:28:03.316546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:55.642 [2024-11-20 11:28:03.316563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.642 [2024-11-20 11:28:03.317025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:55.642 [2024-11-20 11:28:03.317331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:55.642 [2024-11-20 11:28:03.317472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:55.642 [2024-11-20 11:28:03.317795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.642 pt4 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.642 11:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.642 "name": "raid_bdev1", 00:15:55.642 "uuid": "aa57b351-c8c8-4b4a-87fd-db24508d14cb", 00:15:55.642 "strip_size_kb": 0, 00:15:55.642 "state": "online", 00:15:55.642 "raid_level": "raid1", 00:15:55.642 "superblock": true, 00:15:55.642 "num_base_bdevs": 4, 00:15:55.642 "num_base_bdevs_discovered": 3, 00:15:55.642 "num_base_bdevs_operational": 3, 00:15:55.642 "base_bdevs_list": [ 00:15:55.642 { 
00:15:55.642 "name": null, 00:15:55.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.642 "is_configured": false, 00:15:55.642 "data_offset": 2048, 00:15:55.642 "data_size": 63488 00:15:55.642 }, 00:15:55.642 { 00:15:55.642 "name": "pt2", 00:15:55.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.642 "is_configured": true, 00:15:55.642 "data_offset": 2048, 00:15:55.642 "data_size": 63488 00:15:55.642 }, 00:15:55.642 { 00:15:55.642 "name": "pt3", 00:15:55.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.642 "is_configured": true, 00:15:55.642 "data_offset": 2048, 00:15:55.642 "data_size": 63488 00:15:55.642 }, 00:15:55.642 { 00:15:55.642 "name": "pt4", 00:15:55.642 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:55.642 "is_configured": true, 00:15:55.642 "data_offset": 2048, 00:15:55.642 "data_size": 63488 00:15:55.642 } 00:15:55.642 ] 00:15:55.642 }' 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.642 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:56.211 
11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.211 [2024-11-20 11:28:03.831894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' aa57b351-c8c8-4b4a-87fd-db24508d14cb '!=' aa57b351-c8c8-4b4a-87fd-db24508d14cb ']' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74605 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74605 ']' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74605 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74605 00:15:56.211 killing process with pid 74605 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74605' 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74605 00:15:56.211 [2024-11-20 11:28:03.899225] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.211 11:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74605 00:15:56.211 [2024-11-20 11:28:03.899338] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.211 [2024-11-20 11:28:03.899438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.211 [2024-11-20 11:28:03.899458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:56.470 [2024-11-20 11:28:04.254245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.849 ************************************ 00:15:57.849 END TEST raid_superblock_test 00:15:57.849 ************************************ 00:15:57.849 11:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:57.849 00:15:57.849 real 0m9.344s 00:15:57.849 user 0m15.393s 00:15:57.849 sys 0m1.332s 00:15:57.849 11:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.849 11:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.849 11:28:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:57.849 11:28:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:57.849 11:28:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.849 11:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.849 ************************************ 00:15:57.849 START TEST raid_read_error_test 00:15:57.849 ************************************ 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:57.849 11:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.H7zfU4dHA4 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75103 00:15:57.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75103 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75103 ']' 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.849 11:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.849 [2024-11-20 11:28:05.458224] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:15:57.849 [2024-11-20 11:28:05.458607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:15:57.849 [2024-11-20 11:28:05.635738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.109 [2024-11-20 11:28:05.765177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.368 [2024-11-20 11:28:05.969073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.368 [2024-11-20 11:28:05.969404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 BaseBdev1_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 true 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 [2024-11-20 11:28:06.563517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:58.942 [2024-11-20 11:28:06.563741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.942 [2024-11-20 11:28:06.563782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:58.942 [2024-11-20 11:28:06.563802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.942 [2024-11-20 11:28:06.566644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.942 [2024-11-20 11:28:06.566692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.942 BaseBdev1 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 BaseBdev2_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 true 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 [2024-11-20 11:28:06.627427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:58.942 [2024-11-20 11:28:06.627635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.942 [2024-11-20 11:28:06.627672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:58.942 [2024-11-20 11:28:06.627690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.942 [2024-11-20 11:28:06.630461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.942 [2024-11-20 11:28:06.630631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:58.942 BaseBdev2 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 BaseBdev3_malloc 00:15:58.942 11:28:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 true 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.942 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.942 [2024-11-20 11:28:06.697496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:58.942 [2024-11-20 11:28:06.697571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.942 [2024-11-20 11:28:06.697598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:58.943 [2024-11-20 11:28:06.697627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.943 [2024-11-20 11:28:06.700379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.943 BaseBdev3 00:15:58.943 [2024-11-20 11:28:06.700553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.943 BaseBdev4_malloc 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.943 true 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.943 [2024-11-20 11:28:06.757118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:58.943 [2024-11-20 11:28:06.757303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.943 [2024-11-20 11:28:06.757341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:58.943 [2024-11-20 11:28:06.757360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.943 [2024-11-20 11:28:06.760103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.943 [2024-11-20 11:28:06.760155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:58.943 BaseBdev4 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.943 [2024-11-20 11:28:06.765194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.943 [2024-11-20 11:28:06.767726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.943 [2024-11-20 11:28:06.767956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.943 [2024-11-20 11:28:06.768226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.943 [2024-11-20 11:28:06.768559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:58.943 [2024-11-20 11:28:06.768584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:58.943 [2024-11-20 11:28:06.768947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:58.943 [2024-11-20 11:28:06.769176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:58.943 [2024-11-20 11:28:06.769193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:58.943 [2024-11-20 11:28:06.769453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:58.943 11:28:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.943 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.201 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.201 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.201 "name": "raid_bdev1", 00:15:59.201 "uuid": "1f5408a2-7f9b-4f8f-be3a-2977583b2c98", 00:15:59.201 "strip_size_kb": 0, 00:15:59.201 "state": "online", 00:15:59.201 "raid_level": "raid1", 00:15:59.201 "superblock": true, 00:15:59.201 "num_base_bdevs": 4, 00:15:59.201 "num_base_bdevs_discovered": 4, 00:15:59.201 "num_base_bdevs_operational": 4, 00:15:59.201 "base_bdevs_list": [ 00:15:59.201 { 
00:15:59.201 "name": "BaseBdev1", 00:15:59.201 "uuid": "a1862b58-dac7-59e9-b636-622f234273ba", 00:15:59.201 "is_configured": true, 00:15:59.201 "data_offset": 2048, 00:15:59.201 "data_size": 63488 00:15:59.201 }, 00:15:59.201 { 00:15:59.201 "name": "BaseBdev2", 00:15:59.201 "uuid": "353f6ab8-9121-5827-aa0f-fa2ec2eec596", 00:15:59.201 "is_configured": true, 00:15:59.201 "data_offset": 2048, 00:15:59.201 "data_size": 63488 00:15:59.201 }, 00:15:59.201 { 00:15:59.201 "name": "BaseBdev3", 00:15:59.201 "uuid": "1d3de877-b0fb-58b7-8c76-8eb2113fcf7d", 00:15:59.201 "is_configured": true, 00:15:59.201 "data_offset": 2048, 00:15:59.201 "data_size": 63488 00:15:59.201 }, 00:15:59.201 { 00:15:59.201 "name": "BaseBdev4", 00:15:59.201 "uuid": "5315d304-cb89-503a-8c1a-eb4fb13529fb", 00:15:59.201 "is_configured": true, 00:15:59.201 "data_offset": 2048, 00:15:59.201 "data_size": 63488 00:15:59.201 } 00:15:59.201 ] 00:15:59.201 }' 00:15:59.201 11:28:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.201 11:28:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.769 11:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:59.769 11:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:59.769 [2024-11-20 11:28:07.414974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 11:28:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.706 11:28:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.706 "name": "raid_bdev1", 00:16:00.706 "uuid": "1f5408a2-7f9b-4f8f-be3a-2977583b2c98", 00:16:00.706 "strip_size_kb": 0, 00:16:00.706 "state": "online", 00:16:00.706 "raid_level": "raid1", 00:16:00.706 "superblock": true, 00:16:00.706 "num_base_bdevs": 4, 00:16:00.706 "num_base_bdevs_discovered": 4, 00:16:00.706 "num_base_bdevs_operational": 4, 00:16:00.706 "base_bdevs_list": [ 00:16:00.706 { 00:16:00.706 "name": "BaseBdev1", 00:16:00.706 "uuid": "a1862b58-dac7-59e9-b636-622f234273ba", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev2", 00:16:00.706 "uuid": "353f6ab8-9121-5827-aa0f-fa2ec2eec596", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev3", 00:16:00.706 "uuid": "1d3de877-b0fb-58b7-8c76-8eb2113fcf7d", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 }, 00:16:00.706 { 00:16:00.706 "name": "BaseBdev4", 00:16:00.706 "uuid": "5315d304-cb89-503a-8c1a-eb4fb13529fb", 00:16:00.706 "is_configured": true, 00:16:00.706 "data_offset": 2048, 00:16:00.706 "data_size": 63488 00:16:00.706 } 00:16:00.706 ] 00:16:00.706 }' 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.706 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.965 [2024-11-20 11:28:08.794812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.965 [2024-11-20 11:28:08.794853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.965 [2024-11-20 11:28:08.798278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.965 [2024-11-20 11:28:08.798356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.965 [2024-11-20 11:28:08.798514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.965 [2024-11-20 11:28:08.798533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:00.965 { 00:16:00.965 "results": [ 00:16:00.965 { 00:16:00.965 "job": "raid_bdev1", 00:16:00.965 "core_mask": "0x1", 00:16:00.965 "workload": "randrw", 00:16:00.965 "percentage": 50, 00:16:00.965 "status": "finished", 00:16:00.965 "queue_depth": 1, 00:16:00.965 "io_size": 131072, 00:16:00.965 "runtime": 1.377289, 00:16:00.965 "iops": 7560.504730670179, 00:16:00.965 "mibps": 945.0630913337724, 00:16:00.965 "io_failed": 0, 00:16:00.965 "io_timeout": 0, 00:16:00.965 "avg_latency_us": 128.15011166112288, 00:16:00.965 "min_latency_us": 43.054545454545455, 00:16:00.965 "max_latency_us": 1809.6872727272728 00:16:00.965 } 00:16:00.965 ], 00:16:00.965 "core_count": 1 00:16:00.965 } 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75103 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75103 ']' 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75103 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.965 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75103 00:16:01.224 killing process with pid 75103 00:16:01.224 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.224 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.224 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75103' 00:16:01.224 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75103 00:16:01.224 [2024-11-20 11:28:08.828442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.224 11:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75103 00:16:01.483 [2024-11-20 11:28:09.123593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.H7zfU4dHA4 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:02.418 00:16:02.418 real 0m4.878s 00:16:02.418 user 0m6.013s 00:16:02.418 sys 0m0.604s 
00:16:02.418 ************************************ 00:16:02.418 END TEST raid_read_error_test 00:16:02.418 ************************************ 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.418 11:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.418 11:28:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:02.418 11:28:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:02.418 11:28:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.418 11:28:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.678 ************************************ 00:16:02.678 START TEST raid_write_error_test 00:16:02.678 ************************************ 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M08uHKgHBX 00:16:02.678 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75249 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75249 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75249 ']' 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.678 11:28:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.678 [2024-11-20 11:28:10.391478] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:16:02.678 [2024-11-20 11:28:10.391745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75249 ] 00:16:02.937 [2024-11-20 11:28:10.576376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.937 [2024-11-20 11:28:10.730033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.197 [2024-11-20 11:28:10.931272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.197 [2024-11-20 11:28:10.931346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 BaseBdev1_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 true 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 [2024-11-20 11:28:11.468045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:03.765 [2024-11-20 11:28:11.468125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.765 [2024-11-20 11:28:11.468154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:03.765 [2024-11-20 11:28:11.468171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.765 [2024-11-20 11:28:11.471088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.765 [2024-11-20 11:28:11.471133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.765 BaseBdev1 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 BaseBdev2_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:03.765 11:28:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 true 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 [2024-11-20 11:28:11.528463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:03.765 [2024-11-20 11:28:11.528726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.765 [2024-11-20 11:28:11.528795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:03.765 [2024-11-20 11:28:11.528916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.765 [2024-11-20 11:28:11.531795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.765 [2024-11-20 11:28:11.531858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.765 BaseBdev2 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:03.765 BaseBdev3_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 true 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 [2024-11-20 11:28:11.600169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:03.765 [2024-11-20 11:28:11.600389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.765 [2024-11-20 11:28:11.600460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.765 [2024-11-20 11:28:11.600590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.765 [2024-11-20 11:28:11.603471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.765 BaseBdev3 00:16:03.765 [2024-11-20 11:28:11.603670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.765 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 BaseBdev4_malloc 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 true 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 [2024-11-20 11:28:11.660518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:04.026 [2024-11-20 11:28:11.660591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.026 [2024-11-20 11:28:11.660631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:04.026 [2024-11-20 11:28:11.660651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.026 [2024-11-20 11:28:11.663428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.026 [2024-11-20 11:28:11.663480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:04.026 BaseBdev4 
00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 [2024-11-20 11:28:11.668586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.026 [2024-11-20 11:28:11.671128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.026 [2024-11-20 11:28:11.671386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.026 [2024-11-20 11:28:11.671500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.026 [2024-11-20 11:28:11.671812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:04.026 [2024-11-20 11:28:11.671851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:04.026 [2024-11-20 11:28:11.672160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:04.026 [2024-11-20 11:28:11.672368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:04.026 [2024-11-20 11:28:11.672400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:04.026 [2024-11-20 11:28:11.672660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.026 "name": "raid_bdev1", 00:16:04.026 "uuid": "1a4d9445-f1ab-42be-b201-b4710edcc81a", 00:16:04.026 "strip_size_kb": 0, 00:16:04.026 "state": "online", 00:16:04.026 "raid_level": "raid1", 00:16:04.026 "superblock": true, 00:16:04.026 "num_base_bdevs": 4, 00:16:04.026 "num_base_bdevs_discovered": 4, 00:16:04.026 
"num_base_bdevs_operational": 4, 00:16:04.026 "base_bdevs_list": [ 00:16:04.026 { 00:16:04.026 "name": "BaseBdev1", 00:16:04.026 "uuid": "dd3c8ba0-0161-5ee2-89f1-14cc383f805e", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 2048, 00:16:04.026 "data_size": 63488 00:16:04.026 }, 00:16:04.026 { 00:16:04.026 "name": "BaseBdev2", 00:16:04.026 "uuid": "8f108f79-711e-5a14-b49e-05b686d404af", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 2048, 00:16:04.026 "data_size": 63488 00:16:04.026 }, 00:16:04.026 { 00:16:04.026 "name": "BaseBdev3", 00:16:04.026 "uuid": "82f472c5-fd74-5a94-bf70-f83a7758a44f", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 2048, 00:16:04.026 "data_size": 63488 00:16:04.026 }, 00:16:04.026 { 00:16:04.026 "name": "BaseBdev4", 00:16:04.026 "uuid": "8f91c50a-eb31-5914-8164-e64d4abcbedf", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 2048, 00:16:04.026 "data_size": 63488 00:16:04.026 } 00:16:04.026 ] 00:16:04.026 }' 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.026 11:28:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.595 11:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:04.595 11:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:04.595 [2024-11-20 11:28:12.306412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.532 [2024-11-20 11:28:13.184366] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:05.532 [2024-11-20 11:28:13.184431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.532 [2024-11-20 11:28:13.184715] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.532 "name": "raid_bdev1", 00:16:05.532 "uuid": "1a4d9445-f1ab-42be-b201-b4710edcc81a", 00:16:05.532 "strip_size_kb": 0, 00:16:05.532 "state": "online", 00:16:05.532 "raid_level": "raid1", 00:16:05.532 "superblock": true, 00:16:05.532 "num_base_bdevs": 4, 00:16:05.532 "num_base_bdevs_discovered": 3, 00:16:05.532 "num_base_bdevs_operational": 3, 00:16:05.532 "base_bdevs_list": [ 00:16:05.532 { 00:16:05.532 "name": null, 00:16:05.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.532 "is_configured": false, 00:16:05.532 "data_offset": 0, 00:16:05.532 "data_size": 63488 00:16:05.532 }, 00:16:05.532 { 00:16:05.532 "name": "BaseBdev2", 00:16:05.532 "uuid": "8f108f79-711e-5a14-b49e-05b686d404af", 00:16:05.532 "is_configured": true, 00:16:05.532 "data_offset": 2048, 00:16:05.532 "data_size": 63488 00:16:05.532 }, 00:16:05.532 { 00:16:05.532 "name": "BaseBdev3", 00:16:05.532 "uuid": "82f472c5-fd74-5a94-bf70-f83a7758a44f", 00:16:05.532 "is_configured": true, 00:16:05.532 "data_offset": 2048, 00:16:05.532 "data_size": 63488 00:16:05.532 }, 00:16:05.532 { 00:16:05.532 "name": "BaseBdev4", 00:16:05.532 "uuid": "8f91c50a-eb31-5914-8164-e64d4abcbedf", 00:16:05.532 "is_configured": true, 00:16:05.532 "data_offset": 2048, 00:16:05.532 "data_size": 63488 00:16:05.532 } 00:16:05.532 ] 
00:16:05.532 }' 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.532 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.118 [2024-11-20 11:28:13.740956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.118 [2024-11-20 11:28:13.741145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.118 [2024-11-20 11:28:13.744548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.118 { 00:16:06.118 "results": [ 00:16:06.118 { 00:16:06.118 "job": "raid_bdev1", 00:16:06.118 "core_mask": "0x1", 00:16:06.118 "workload": "randrw", 00:16:06.118 "percentage": 50, 00:16:06.118 "status": "finished", 00:16:06.118 "queue_depth": 1, 00:16:06.118 "io_size": 131072, 00:16:06.118 "runtime": 1.432254, 00:16:06.118 "iops": 8008.356059749179, 00:16:06.118 "mibps": 1001.0445074686473, 00:16:06.118 "io_failed": 0, 00:16:06.118 "io_timeout": 0, 00:16:06.118 "avg_latency_us": 120.50408179440439, 00:16:06.118 "min_latency_us": 40.49454545454545, 00:16:06.118 "max_latency_us": 1861.8181818181818 00:16:06.118 } 00:16:06.118 ], 00:16:06.118 "core_count": 1 00:16:06.118 } 00:16:06.118 [2024-11-20 11:28:13.744747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.118 [2024-11-20 11:28:13.744963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.118 [2024-11-20 11:28:13.744987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75249 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75249 ']' 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75249 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75249 00:16:06.118 killing process with pid 75249 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75249' 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75249 00:16:06.118 [2024-11-20 11:28:13.781601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.118 11:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75249 00:16:06.377 [2024-11-20 11:28:14.082983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M08uHKgHBX 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:07.754 00:16:07.754 real 0m4.938s 00:16:07.754 user 0m6.124s 00:16:07.754 sys 0m0.585s 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.754 ************************************ 00:16:07.754 END TEST raid_write_error_test 00:16:07.754 ************************************ 00:16:07.754 11:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 11:28:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:07.754 11:28:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:07.754 11:28:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:07.754 11:28:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:07.754 11:28:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.754 11:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 ************************************ 00:16:07.754 START TEST raid_rebuild_test 00:16:07.754 ************************************ 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:07.754 
11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:07.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75401 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75401 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75401 ']' 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.754 11:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.755 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:07.755 Zero copy mechanism will not be used. 00:16:07.755 [2024-11-20 11:28:15.360792] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:16:07.755 [2024-11-20 11:28:15.360975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75401 ] 00:16:07.755 [2024-11-20 11:28:15.540014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.013 [2024-11-20 11:28:15.710164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.272 [2024-11-20 11:28:15.949572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.272 [2024-11-20 11:28:15.949884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 BaseBdev1_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 [2024-11-20 11:28:16.525554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.996 
[2024-11-20 11:28:16.525855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.996 [2024-11-20 11:28:16.525902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:08.996 [2024-11-20 11:28:16.525924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.996 [2024-11-20 11:28:16.528802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.996 [2024-11-20 11:28:16.528993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.996 BaseBdev1 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 BaseBdev2_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 [2024-11-20 11:28:16.575199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:08.996 [2024-11-20 11:28:16.575277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.996 [2024-11-20 11:28:16.575307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:16:08.996 [2024-11-20 11:28:16.575327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.996 [2024-11-20 11:28:16.578174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.996 [2024-11-20 11:28:16.578225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.996 BaseBdev2 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 spare_malloc 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 spare_delay 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 [2024-11-20 11:28:16.644079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.996 [2024-11-20 11:28:16.644157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:08.996 [2024-11-20 11:28:16.644191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:08.996 [2024-11-20 11:28:16.644210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.996 [2024-11-20 11:28:16.647065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.996 [2024-11-20 11:28:16.647118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.996 spare 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 [2024-11-20 11:28:16.652145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.996 [2024-11-20 11:28:16.654669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.996 [2024-11-20 11:28:16.654797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:08.996 [2024-11-20 11:28:16.654820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:08.996 [2024-11-20 11:28:16.655157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:08.996 [2024-11-20 11:28:16.655362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:08.996 [2024-11-20 11:28:16.655382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:08.996 [2024-11-20 11:28:16.655584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.996 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.996 "name": "raid_bdev1", 00:16:08.996 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:08.996 "strip_size_kb": 0, 00:16:08.996 "state": "online", 00:16:08.996 
"raid_level": "raid1", 00:16:08.996 "superblock": false, 00:16:08.996 "num_base_bdevs": 2, 00:16:08.996 "num_base_bdevs_discovered": 2, 00:16:08.997 "num_base_bdevs_operational": 2, 00:16:08.997 "base_bdevs_list": [ 00:16:08.997 { 00:16:08.997 "name": "BaseBdev1", 00:16:08.997 "uuid": "b602f2a7-eb6f-53f2-a0aa-6dd9aaec6be6", 00:16:08.997 "is_configured": true, 00:16:08.997 "data_offset": 0, 00:16:08.997 "data_size": 65536 00:16:08.997 }, 00:16:08.997 { 00:16:08.997 "name": "BaseBdev2", 00:16:08.997 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:08.997 "is_configured": true, 00:16:08.997 "data_offset": 0, 00:16:08.997 "data_size": 65536 00:16:08.997 } 00:16:08.997 ] 00:16:08.997 }' 00:16:08.997 11:28:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.997 11:28:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.565 [2024-11-20 11:28:17.164640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.565 11:28:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.565 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:09.824 [2024-11-20 11:28:17.556447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:09.824 /dev/nbd0 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.824 1+0 records in 00:16:09.824 1+0 records out 00:16:09.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443258 s, 9.2 MB/s 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.824 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:09.825 11:28:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:16.382 65536+0 records in 00:16:16.382 65536+0 records out 00:16:16.382 33554432 bytes (34 MB, 32 MiB) copied, 6.48803 s, 5.2 MB/s 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.382 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.642 [2024-11-20 11:28:24.386202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.642 [2024-11-20 11:28:24.415583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.642 11:28:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.642 "name": "raid_bdev1", 00:16:16.642 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:16.642 "strip_size_kb": 0, 00:16:16.642 "state": "online", 00:16:16.642 "raid_level": "raid1", 00:16:16.642 "superblock": false, 00:16:16.642 "num_base_bdevs": 2, 00:16:16.642 "num_base_bdevs_discovered": 1, 00:16:16.642 "num_base_bdevs_operational": 1, 00:16:16.642 "base_bdevs_list": [ 00:16:16.642 { 00:16:16.642 "name": null, 00:16:16.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.642 "is_configured": false, 00:16:16.642 "data_offset": 0, 00:16:16.642 "data_size": 65536 00:16:16.642 }, 00:16:16.642 { 00:16:16.642 "name": "BaseBdev2", 00:16:16.642 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:16.642 "is_configured": true, 00:16:16.642 "data_offset": 0, 00:16:16.642 "data_size": 65536 00:16:16.642 } 00:16:16.642 ] 00:16:16.642 }' 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.642 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.209 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.209 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.209 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.209 [2024-11-20 11:28:24.903784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.209 [2024-11-20 11:28:24.920863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:16:17.209 11:28:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.209 11:28:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:17.209 [2024-11-20 11:28:24.923340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.144 "name": "raid_bdev1", 00:16:18.144 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:18.144 "strip_size_kb": 0, 00:16:18.144 "state": "online", 00:16:18.144 "raid_level": "raid1", 00:16:18.144 "superblock": false, 00:16:18.144 "num_base_bdevs": 2, 00:16:18.144 "num_base_bdevs_discovered": 2, 00:16:18.144 "num_base_bdevs_operational": 2, 00:16:18.144 "process": { 00:16:18.144 "type": "rebuild", 00:16:18.144 "target": "spare", 00:16:18.144 "progress": { 00:16:18.144 
"blocks": 20480, 00:16:18.144 "percent": 31 00:16:18.144 } 00:16:18.144 }, 00:16:18.144 "base_bdevs_list": [ 00:16:18.144 { 00:16:18.144 "name": "spare", 00:16:18.144 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:18.144 "is_configured": true, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 }, 00:16:18.144 { 00:16:18.144 "name": "BaseBdev2", 00:16:18.144 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:18.144 "is_configured": true, 00:16:18.144 "data_offset": 0, 00:16:18.144 "data_size": 65536 00:16:18.144 } 00:16:18.144 ] 00:16:18.144 }' 00:16:18.144 11:28:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.402 [2024-11-20 11:28:26.081300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.402 [2024-11-20 11:28:26.132743] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.402 [2024-11-20 11:28:26.133064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.402 [2024-11-20 11:28:26.133258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.402 [2024-11-20 11:28:26.133336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.402 11:28:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.402 "name": "raid_bdev1", 00:16:18.402 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:18.402 "strip_size_kb": 0, 00:16:18.402 "state": "online", 00:16:18.402 "raid_level": "raid1", 00:16:18.402 
"superblock": false, 00:16:18.402 "num_base_bdevs": 2, 00:16:18.402 "num_base_bdevs_discovered": 1, 00:16:18.402 "num_base_bdevs_operational": 1, 00:16:18.402 "base_bdevs_list": [ 00:16:18.402 { 00:16:18.402 "name": null, 00:16:18.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.402 "is_configured": false, 00:16:18.402 "data_offset": 0, 00:16:18.402 "data_size": 65536 00:16:18.402 }, 00:16:18.402 { 00:16:18.402 "name": "BaseBdev2", 00:16:18.402 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:18.402 "is_configured": true, 00:16:18.402 "data_offset": 0, 00:16:18.402 "data_size": 65536 00:16:18.402 } 00:16:18.402 ] 00:16:18.402 }' 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.402 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:18.967 "name": "raid_bdev1", 00:16:18.967 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:18.967 "strip_size_kb": 0, 00:16:18.967 "state": "online", 00:16:18.967 "raid_level": "raid1", 00:16:18.967 "superblock": false, 00:16:18.967 "num_base_bdevs": 2, 00:16:18.967 "num_base_bdevs_discovered": 1, 00:16:18.967 "num_base_bdevs_operational": 1, 00:16:18.967 "base_bdevs_list": [ 00:16:18.967 { 00:16:18.967 "name": null, 00:16:18.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.967 "is_configured": false, 00:16:18.967 "data_offset": 0, 00:16:18.967 "data_size": 65536 00:16:18.967 }, 00:16:18.967 { 00:16:18.967 "name": "BaseBdev2", 00:16:18.967 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:18.967 "is_configured": true, 00:16:18.967 "data_offset": 0, 00:16:18.968 "data_size": 65536 00:16:18.968 } 00:16:18.968 ] 00:16:18.968 }' 00:16:18.968 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.968 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.968 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.226 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.226 11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.226 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.226 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.226 [2024-11-20 11:28:26.832657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.226 [2024-11-20 11:28:26.849425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:16:19.226 11:28:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.226 
11:28:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:19.226 [2024-11-20 11:28:26.853067] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.160 "name": "raid_bdev1", 00:16:20.160 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:20.160 "strip_size_kb": 0, 00:16:20.160 "state": "online", 00:16:20.160 "raid_level": "raid1", 00:16:20.160 "superblock": false, 00:16:20.160 "num_base_bdevs": 2, 00:16:20.160 "num_base_bdevs_discovered": 2, 00:16:20.160 "num_base_bdevs_operational": 2, 00:16:20.160 "process": { 00:16:20.160 "type": "rebuild", 00:16:20.160 "target": "spare", 00:16:20.160 "progress": { 00:16:20.160 "blocks": 20480, 00:16:20.160 "percent": 31 00:16:20.160 } 00:16:20.160 }, 00:16:20.160 "base_bdevs_list": [ 
00:16:20.160 { 00:16:20.160 "name": "spare", 00:16:20.160 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:20.160 "is_configured": true, 00:16:20.160 "data_offset": 0, 00:16:20.160 "data_size": 65536 00:16:20.160 }, 00:16:20.160 { 00:16:20.160 "name": "BaseBdev2", 00:16:20.160 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:20.160 "is_configured": true, 00:16:20.160 "data_offset": 0, 00:16:20.160 "data_size": 65536 00:16:20.160 } 00:16:20.160 ] 00:16:20.160 }' 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.160 11:28:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.420 
11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.420 "name": "raid_bdev1", 00:16:20.420 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:20.420 "strip_size_kb": 0, 00:16:20.420 "state": "online", 00:16:20.420 "raid_level": "raid1", 00:16:20.420 "superblock": false, 00:16:20.420 "num_base_bdevs": 2, 00:16:20.420 "num_base_bdevs_discovered": 2, 00:16:20.420 "num_base_bdevs_operational": 2, 00:16:20.420 "process": { 00:16:20.420 "type": "rebuild", 00:16:20.420 "target": "spare", 00:16:20.420 "progress": { 00:16:20.420 "blocks": 22528, 00:16:20.420 "percent": 34 00:16:20.420 } 00:16:20.420 }, 00:16:20.420 "base_bdevs_list": [ 00:16:20.420 { 00:16:20.420 "name": "spare", 00:16:20.420 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:20.420 "is_configured": true, 00:16:20.420 "data_offset": 0, 00:16:20.420 "data_size": 65536 00:16:20.420 }, 00:16:20.420 { 00:16:20.420 "name": "BaseBdev2", 00:16:20.420 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:20.420 "is_configured": true, 00:16:20.420 "data_offset": 0, 00:16:20.420 "data_size": 65536 00:16:20.420 } 00:16:20.420 ] 00:16:20.420 }' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.420 11:28:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.355 11:28:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.613 "name": "raid_bdev1", 00:16:21.613 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:21.613 "strip_size_kb": 0, 00:16:21.613 "state": "online", 00:16:21.613 "raid_level": "raid1", 00:16:21.613 "superblock": false, 00:16:21.613 "num_base_bdevs": 2, 00:16:21.613 "num_base_bdevs_discovered": 2, 00:16:21.613 "num_base_bdevs_operational": 2, 00:16:21.613 "process": { 
00:16:21.613 "type": "rebuild", 00:16:21.613 "target": "spare", 00:16:21.613 "progress": { 00:16:21.613 "blocks": 47104, 00:16:21.613 "percent": 71 00:16:21.613 } 00:16:21.613 }, 00:16:21.613 "base_bdevs_list": [ 00:16:21.613 { 00:16:21.613 "name": "spare", 00:16:21.613 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:21.613 "is_configured": true, 00:16:21.613 "data_offset": 0, 00:16:21.613 "data_size": 65536 00:16:21.613 }, 00:16:21.613 { 00:16:21.613 "name": "BaseBdev2", 00:16:21.613 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:21.613 "is_configured": true, 00:16:21.613 "data_offset": 0, 00:16:21.613 "data_size": 65536 00:16:21.613 } 00:16:21.613 ] 00:16:21.613 }' 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.613 11:28:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.556 [2024-11-20 11:28:30.077422] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:22.556 [2024-11-20 11:28:30.077802] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.556 [2024-11-20 11:28:30.077890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.556 "name": "raid_bdev1", 00:16:22.556 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:22.556 "strip_size_kb": 0, 00:16:22.556 "state": "online", 00:16:22.556 "raid_level": "raid1", 00:16:22.556 "superblock": false, 00:16:22.557 "num_base_bdevs": 2, 00:16:22.557 "num_base_bdevs_discovered": 2, 00:16:22.557 "num_base_bdevs_operational": 2, 00:16:22.557 "base_bdevs_list": [ 00:16:22.557 { 00:16:22.557 "name": "spare", 00:16:22.557 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:22.557 "is_configured": true, 00:16:22.557 "data_offset": 0, 00:16:22.557 "data_size": 65536 00:16:22.557 }, 00:16:22.557 { 00:16:22.557 "name": "BaseBdev2", 00:16:22.557 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:22.557 "is_configured": true, 00:16:22.557 "data_offset": 0, 00:16:22.557 "data_size": 65536 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 }' 00:16:22.557 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:22.858 11:28:30 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.858 "name": "raid_bdev1", 00:16:22.858 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:22.858 "strip_size_kb": 0, 00:16:22.858 "state": "online", 00:16:22.858 "raid_level": "raid1", 00:16:22.858 "superblock": false, 00:16:22.858 "num_base_bdevs": 2, 00:16:22.858 "num_base_bdevs_discovered": 2, 00:16:22.858 "num_base_bdevs_operational": 2, 00:16:22.858 "base_bdevs_list": [ 00:16:22.858 { 00:16:22.858 "name": "spare", 00:16:22.858 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:22.858 "is_configured": true, 
00:16:22.858 "data_offset": 0, 00:16:22.858 "data_size": 65536 00:16:22.858 }, 00:16:22.858 { 00:16:22.858 "name": "BaseBdev2", 00:16:22.858 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:22.858 "is_configured": true, 00:16:22.858 "data_offset": 0, 00:16:22.858 "data_size": 65536 00:16:22.858 } 00:16:22.858 ] 00:16:22.858 }' 00:16:22.858 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.859 "name": "raid_bdev1", 00:16:22.859 "uuid": "15fe8efe-7d4b-43f1-8734-3e5ae8fb1c7e", 00:16:22.859 "strip_size_kb": 0, 00:16:22.859 "state": "online", 00:16:22.859 "raid_level": "raid1", 00:16:22.859 "superblock": false, 00:16:22.859 "num_base_bdevs": 2, 00:16:22.859 "num_base_bdevs_discovered": 2, 00:16:22.859 "num_base_bdevs_operational": 2, 00:16:22.859 "base_bdevs_list": [ 00:16:22.859 { 00:16:22.859 "name": "spare", 00:16:22.859 "uuid": "71787bd8-33db-5bbc-a5cd-bea0c15abed8", 00:16:22.859 "is_configured": true, 00:16:22.859 "data_offset": 0, 00:16:22.859 "data_size": 65536 00:16:22.859 }, 00:16:22.859 { 00:16:22.859 "name": "BaseBdev2", 00:16:22.859 "uuid": "29d38c72-3048-5e41-8db0-b01e07abfc67", 00:16:22.859 "is_configured": true, 00:16:22.859 "data_offset": 0, 00:16:22.859 "data_size": 65536 00:16:22.859 } 00:16:22.859 ] 00:16:22.859 }' 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.859 11:28:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 [2024-11-20 11:28:31.165683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.427 [2024-11-20 11:28:31.165866] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.427 [2024-11-20 11:28:31.166015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.427 [2024-11-20 11:28:31.166112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.427 [2024-11-20 11:28:31.166130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:16:23.427 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.428 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:23.428 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.428 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.428 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:23.687 /dev/nbd0 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.688 1+0 records in 00:16:23.688 1+0 records out 00:16:23.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388196 s, 10.6 MB/s 00:16:23.688 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.994 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:24.253 /dev/nbd1 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.253 1+0 records in 00:16:24.253 1+0 records out 00:16:24.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348756 s, 11.7 MB/s 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.253 11:28:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.253 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.819 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75401 00:16:25.079 11:28:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75401 ']' 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75401 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75401 00:16:25.079 killing process with pid 75401 00:16:25.079 Received shutdown signal, test time was about 60.000000 seconds 00:16:25.079 00:16:25.079 Latency(us) 00:16:25.079 [2024-11-20T11:28:32.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.079 [2024-11-20T11:28:32.925Z] =================================================================================================================== 00:16:25.079 [2024-11-20T11:28:32.925Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75401' 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75401 00:16:25.079 [2024-11-20 11:28:32.718971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.079 11:28:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75401 00:16:25.337 [2024-11-20 11:28:32.993220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.274 ************************************ 00:16:26.274 END TEST raid_rebuild_test 00:16:26.274 ************************************ 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:26.274 00:16:26.274 real 0m18.771s 00:16:26.274 user 0m21.173s 00:16:26.274 sys 0m3.489s 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.274 11:28:34 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:16:26.274 11:28:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:26.274 11:28:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.274 11:28:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.274 ************************************ 00:16:26.274 START TEST raid_rebuild_test_sb 00:16:26.274 ************************************ 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75850 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75850 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75850 ']' 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:26.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.274 11:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.533 [2024-11-20 11:28:34.222853] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:16:26.533 [2024-11-20 11:28:34.223371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75850 ] 00:16:26.533 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:26.533 Zero copy mechanism will not be used. 
00:16:26.791 [2024-11-20 11:28:34.422189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.791 [2024-11-20 11:28:34.583424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.049 [2024-11-20 11:28:34.790164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.049 [2024-11-20 11:28:34.790448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 BaseBdev1_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 [2024-11-20 11:28:35.279923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:27.617 [2024-11-20 11:28:35.280013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.617 [2024-11-20 11:28:35.280048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:27.617 [2024-11-20 
11:28:35.280068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.617 [2024-11-20 11:28:35.282988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.617 [2024-11-20 11:28:35.283182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.617 BaseBdev1 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 BaseBdev2_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 [2024-11-20 11:28:35.328480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:27.617 [2024-11-20 11:28:35.328563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.617 [2024-11-20 11:28:35.328593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:27.617 [2024-11-20 11:28:35.328633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.617 [2024-11-20 11:28:35.331631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:27.617 [2024-11-20 11:28:35.331681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:27.617 BaseBdev2 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 spare_malloc 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 spare_delay 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 [2024-11-20 11:28:35.400411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.617 [2024-11-20 11:28:35.400503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.617 [2024-11-20 11:28:35.400547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:27.617 [2024-11-20 11:28:35.400566] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.617 [2024-11-20 11:28:35.403812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.617 [2024-11-20 11:28:35.404038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.617 spare 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 [2024-11-20 11:28:35.408552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.617 [2024-11-20 11:28:35.411323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.617 [2024-11-20 11:28:35.411813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:27.617 [2024-11-20 11:28:35.411847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.617 [2024-11-20 11:28:35.412229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:27.617 [2024-11-20 11:28:35.412474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:27.617 [2024-11-20 11:28:35.412490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:27.617 [2024-11-20 11:28:35.412854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.617 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.876 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.876 "name": "raid_bdev1", 00:16:27.876 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:27.876 "strip_size_kb": 0, 00:16:27.876 "state": "online", 00:16:27.876 "raid_level": "raid1", 00:16:27.876 "superblock": true, 00:16:27.876 "num_base_bdevs": 2, 00:16:27.876 
"num_base_bdevs_discovered": 2, 00:16:27.876 "num_base_bdevs_operational": 2, 00:16:27.876 "base_bdevs_list": [ 00:16:27.876 { 00:16:27.876 "name": "BaseBdev1", 00:16:27.876 "uuid": "97cedb69-e275-523c-9723-31eb35b2e851", 00:16:27.876 "is_configured": true, 00:16:27.876 "data_offset": 2048, 00:16:27.876 "data_size": 63488 00:16:27.876 }, 00:16:27.876 { 00:16:27.876 "name": "BaseBdev2", 00:16:27.876 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:27.876 "is_configured": true, 00:16:27.876 "data_offset": 2048, 00:16:27.876 "data_size": 63488 00:16:27.876 } 00:16:27.876 ] 00:16:27.876 }' 00:16:27.876 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.876 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:28.135 [2024-11-20 11:28:35.909275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:28.135 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:28.393 11:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.393 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:28.394 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.394 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:28.394 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.394 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.394 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:28.652 [2024-11-20 11:28:36.277054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:28.652 /dev/nbd0 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.652 1+0 records in 00:16:28.652 1+0 records out 00:16:28.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406174 s, 10.1 MB/s 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.652 11:28:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:28.652 11:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:35.213 63488+0 records in 00:16:35.213 63488+0 records out 00:16:35.213 32505856 bytes (33 MB, 31 MiB) copied, 6.0816 s, 5.3 MB/s 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.213 [2024-11-20 11:28:42.677549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.213 [2024-11-20 11:28:42.689656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.213 11:28:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.213 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.214 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.214 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.214 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.214 "name": "raid_bdev1", 00:16:35.214 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:35.214 "strip_size_kb": 0, 00:16:35.214 "state": "online", 00:16:35.214 "raid_level": "raid1", 00:16:35.214 "superblock": true, 00:16:35.214 "num_base_bdevs": 2, 00:16:35.214 "num_base_bdevs_discovered": 1, 00:16:35.214 "num_base_bdevs_operational": 1, 00:16:35.214 "base_bdevs_list": [ 00:16:35.214 { 00:16:35.214 "name": null, 00:16:35.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.214 "is_configured": false, 00:16:35.214 "data_offset": 0, 00:16:35.214 "data_size": 63488 00:16:35.214 }, 00:16:35.214 { 00:16:35.214 "name": "BaseBdev2", 00:16:35.214 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:35.214 "is_configured": true, 00:16:35.214 "data_offset": 2048, 00:16:35.214 "data_size": 63488 00:16:35.214 } 00:16:35.214 ] 00:16:35.214 }' 00:16:35.214 11:28:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.214 11:28:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.471 11:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.471 11:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.471 11:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.471 [2024-11-20 11:28:43.257858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:35.472 [2024-11-20 11:28:43.274760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:35.472 11:28:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.472 11:28:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:35.472 [2024-11-20 11:28:43.277339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.846 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.846 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.847 "name": "raid_bdev1", 00:16:36.847 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:36.847 "strip_size_kb": 0, 00:16:36.847 "state": "online", 00:16:36.847 "raid_level": "raid1", 00:16:36.847 "superblock": true, 00:16:36.847 "num_base_bdevs": 2, 00:16:36.847 
"num_base_bdevs_discovered": 2, 00:16:36.847 "num_base_bdevs_operational": 2, 00:16:36.847 "process": { 00:16:36.847 "type": "rebuild", 00:16:36.847 "target": "spare", 00:16:36.847 "progress": { 00:16:36.847 "blocks": 20480, 00:16:36.847 "percent": 32 00:16:36.847 } 00:16:36.847 }, 00:16:36.847 "base_bdevs_list": [ 00:16:36.847 { 00:16:36.847 "name": "spare", 00:16:36.847 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:36.847 "is_configured": true, 00:16:36.847 "data_offset": 2048, 00:16:36.847 "data_size": 63488 00:16:36.847 }, 00:16:36.847 { 00:16:36.847 "name": "BaseBdev2", 00:16:36.847 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:36.847 "is_configured": true, 00:16:36.847 "data_offset": 2048, 00:16:36.847 "data_size": 63488 00:16:36.847 } 00:16:36.847 ] 00:16:36.847 }' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 [2024-11-20 11:28:44.446844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.847 [2024-11-20 11:28:44.485998] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.847 [2024-11-20 11:28:44.486125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.847 [2024-11-20 11:28:44.486149] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.847 [2024-11-20 11:28:44.486165] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.847 11:28:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.847 "name": "raid_bdev1", 00:16:36.847 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:36.847 "strip_size_kb": 0, 00:16:36.847 "state": "online", 00:16:36.847 "raid_level": "raid1", 00:16:36.847 "superblock": true, 00:16:36.847 "num_base_bdevs": 2, 00:16:36.847 "num_base_bdevs_discovered": 1, 00:16:36.847 "num_base_bdevs_operational": 1, 00:16:36.847 "base_bdevs_list": [ 00:16:36.847 { 00:16:36.847 "name": null, 00:16:36.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.847 "is_configured": false, 00:16:36.847 "data_offset": 0, 00:16:36.847 "data_size": 63488 00:16:36.847 }, 00:16:36.847 { 00:16:36.847 "name": "BaseBdev2", 00:16:36.847 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:36.847 "is_configured": true, 00:16:36.847 "data_offset": 2048, 00:16:36.847 "data_size": 63488 00:16:36.847 } 00:16:36.847 ] 00:16:36.847 }' 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.847 11:28:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.413 "name": "raid_bdev1", 00:16:37.413 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:37.413 "strip_size_kb": 0, 00:16:37.413 "state": "online", 00:16:37.413 "raid_level": "raid1", 00:16:37.413 "superblock": true, 00:16:37.413 "num_base_bdevs": 2, 00:16:37.413 "num_base_bdevs_discovered": 1, 00:16:37.413 "num_base_bdevs_operational": 1, 00:16:37.413 "base_bdevs_list": [ 00:16:37.413 { 00:16:37.413 "name": null, 00:16:37.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.413 "is_configured": false, 00:16:37.413 "data_offset": 0, 00:16:37.413 "data_size": 63488 00:16:37.413 }, 00:16:37.413 { 00:16:37.413 "name": "BaseBdev2", 00:16:37.413 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:37.413 "is_configured": true, 00:16:37.413 "data_offset": 2048, 00:16:37.413 "data_size": 63488 00:16:37.413 } 00:16:37.413 ] 00:16:37.413 }' 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.413 11:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:37.414 [2024-11-20 11:28:45.204760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.414 [2024-11-20 11:28:45.220554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:37.414 11:28:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.414 11:28:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:37.414 [2024-11-20 11:28:45.223088] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.818 "name": "raid_bdev1", 00:16:38.818 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:38.818 "strip_size_kb": 0, 00:16:38.818 "state": "online", 00:16:38.818 "raid_level": "raid1", 
00:16:38.818 "superblock": true, 00:16:38.818 "num_base_bdevs": 2, 00:16:38.818 "num_base_bdevs_discovered": 2, 00:16:38.818 "num_base_bdevs_operational": 2, 00:16:38.818 "process": { 00:16:38.818 "type": "rebuild", 00:16:38.818 "target": "spare", 00:16:38.818 "progress": { 00:16:38.818 "blocks": 20480, 00:16:38.818 "percent": 32 00:16:38.818 } 00:16:38.818 }, 00:16:38.818 "base_bdevs_list": [ 00:16:38.818 { 00:16:38.818 "name": "spare", 00:16:38.818 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:38.818 "is_configured": true, 00:16:38.818 "data_offset": 2048, 00:16:38.818 "data_size": 63488 00:16:38.818 }, 00:16:38.818 { 00:16:38.818 "name": "BaseBdev2", 00:16:38.818 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:38.818 "is_configured": true, 00:16:38.818 "data_offset": 2048, 00:16:38.818 "data_size": 63488 00:16:38.818 } 00:16:38.818 ] 00:16:38.818 }' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:38.818 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:38.818 11:28:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.818 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.818 "name": "raid_bdev1", 00:16:38.818 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:38.818 "strip_size_kb": 0, 00:16:38.818 "state": "online", 00:16:38.818 "raid_level": "raid1", 00:16:38.818 "superblock": true, 00:16:38.818 "num_base_bdevs": 2, 00:16:38.818 "num_base_bdevs_discovered": 2, 00:16:38.818 "num_base_bdevs_operational": 2, 00:16:38.818 "process": { 00:16:38.818 "type": "rebuild", 00:16:38.818 "target": "spare", 00:16:38.818 "progress": { 00:16:38.818 "blocks": 22528, 00:16:38.818 "percent": 35 00:16:38.818 } 00:16:38.818 }, 00:16:38.818 "base_bdevs_list": [ 
00:16:38.818 { 00:16:38.818 "name": "spare", 00:16:38.818 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:38.818 "is_configured": true, 00:16:38.818 "data_offset": 2048, 00:16:38.818 "data_size": 63488 00:16:38.818 }, 00:16:38.818 { 00:16:38.818 "name": "BaseBdev2", 00:16:38.818 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:38.818 "is_configured": true, 00:16:38.818 "data_offset": 2048, 00:16:38.819 "data_size": 63488 00:16:38.819 } 00:16:38.819 ] 00:16:38.819 }' 00:16:38.819 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.819 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.819 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.819 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.819 11:28:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.754 11:28:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.013 "name": "raid_bdev1", 00:16:40.013 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:40.013 "strip_size_kb": 0, 00:16:40.013 "state": "online", 00:16:40.013 "raid_level": "raid1", 00:16:40.013 "superblock": true, 00:16:40.013 "num_base_bdevs": 2, 00:16:40.013 "num_base_bdevs_discovered": 2, 00:16:40.013 "num_base_bdevs_operational": 2, 00:16:40.013 "process": { 00:16:40.013 "type": "rebuild", 00:16:40.013 "target": "spare", 00:16:40.013 "progress": { 00:16:40.013 "blocks": 47104, 00:16:40.013 "percent": 74 00:16:40.013 } 00:16:40.013 }, 00:16:40.013 "base_bdevs_list": [ 00:16:40.013 { 00:16:40.013 "name": "spare", 00:16:40.013 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:40.013 "is_configured": true, 00:16:40.013 "data_offset": 2048, 00:16:40.013 "data_size": 63488 00:16:40.013 }, 00:16:40.013 { 00:16:40.013 "name": "BaseBdev2", 00:16:40.013 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:40.013 "is_configured": true, 00:16:40.013 "data_offset": 2048, 00:16:40.013 "data_size": 63488 00:16:40.013 } 00:16:40.013 ] 00:16:40.013 }' 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.013 11:28:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.580 [2024-11-20 
11:28:48.345854] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:40.580 [2024-11-20 11:28:48.345990] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.580 [2024-11-20 11:28:48.346195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.148 "name": "raid_bdev1", 00:16:41.148 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:41.148 "strip_size_kb": 0, 00:16:41.148 "state": "online", 00:16:41.148 "raid_level": "raid1", 00:16:41.148 "superblock": true, 00:16:41.148 "num_base_bdevs": 2, 00:16:41.148 "num_base_bdevs_discovered": 2, 00:16:41.148 
"num_base_bdevs_operational": 2, 00:16:41.148 "base_bdevs_list": [ 00:16:41.148 { 00:16:41.148 "name": "spare", 00:16:41.148 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 2048, 00:16:41.148 "data_size": 63488 00:16:41.148 }, 00:16:41.148 { 00:16:41.148 "name": "BaseBdev2", 00:16:41.148 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 2048, 00:16:41.148 "data_size": 63488 00:16:41.148 } 00:16:41.148 ] 00:16:41.148 }' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.148 "name": "raid_bdev1", 00:16:41.148 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:41.148 "strip_size_kb": 0, 00:16:41.148 "state": "online", 00:16:41.148 "raid_level": "raid1", 00:16:41.148 "superblock": true, 00:16:41.148 "num_base_bdevs": 2, 00:16:41.148 "num_base_bdevs_discovered": 2, 00:16:41.148 "num_base_bdevs_operational": 2, 00:16:41.148 "base_bdevs_list": [ 00:16:41.148 { 00:16:41.148 "name": "spare", 00:16:41.148 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 2048, 00:16:41.148 "data_size": 63488 00:16:41.148 }, 00:16:41.148 { 00:16:41.148 "name": "BaseBdev2", 00:16:41.148 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:41.148 "is_configured": true, 00:16:41.148 "data_offset": 2048, 00:16:41.148 "data_size": 63488 00:16:41.148 } 00:16:41.148 ] 00:16:41.148 }' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.148 11:28:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.408 11:28:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.408 "name": "raid_bdev1", 00:16:41.408 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:41.408 "strip_size_kb": 0, 00:16:41.408 "state": "online", 00:16:41.408 "raid_level": "raid1", 00:16:41.408 "superblock": true, 00:16:41.408 "num_base_bdevs": 2, 00:16:41.408 "num_base_bdevs_discovered": 2, 00:16:41.408 "num_base_bdevs_operational": 2, 00:16:41.408 "base_bdevs_list": [ 00:16:41.408 { 00:16:41.408 "name": "spare", 00:16:41.408 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:41.408 "is_configured": true, 00:16:41.408 "data_offset": 2048, 00:16:41.408 "data_size": 63488 00:16:41.408 }, 00:16:41.408 { 
00:16:41.408 "name": "BaseBdev2", 00:16:41.408 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:41.408 "is_configured": true, 00:16:41.408 "data_offset": 2048, 00:16:41.408 "data_size": 63488 00:16:41.408 } 00:16:41.408 ] 00:16:41.408 }' 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.408 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.975 [2024-11-20 11:28:49.556774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.975 [2024-11-20 11:28:49.556827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.975 [2024-11-20 11:28:49.556927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.975 [2024-11-20 11:28:49.557019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.975 [2024-11-20 11:28:49.557040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:41.975 11:28:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.975 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:42.234 /dev/nbd0 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.234 1+0 records in 00:16:42.234 1+0 records out 00:16:42.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300154 s, 13.6 MB/s 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.234 11:28:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:42.492 /dev/nbd1 00:16:42.492 11:28:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.492 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.492 1+0 records in 00:16:42.492 1+0 records out 00:16:42.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339835 s, 12.1 MB/s 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:42.493 11:28:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:42.493 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.751 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.010 11:28:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.581 [2024-11-20 11:28:51.208441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:16:43.581 [2024-11-20 11:28:51.208528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.581 [2024-11-20 11:28:51.208572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:43.581 [2024-11-20 11:28:51.208593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.581 [2024-11-20 11:28:51.211692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.581 [2024-11-20 11:28:51.211751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.581 [2024-11-20 11:28:51.211863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:43.581 [2024-11-20 11:28:51.211931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.581 [2024-11-20 11:28:51.212124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.581 spare 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.581 [2024-11-20 11:28:51.312246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:43.581 [2024-11-20 11:28:51.312341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:43.581 [2024-11-20 11:28:51.312756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:43.581 [2024-11-20 11:28:51.313010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:43.581 [2024-11-20 11:28:51.313031] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:43.581 [2024-11-20 11:28:51.313270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.581 
11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.581 "name": "raid_bdev1", 00:16:43.581 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:43.581 "strip_size_kb": 0, 00:16:43.581 "state": "online", 00:16:43.581 "raid_level": "raid1", 00:16:43.581 "superblock": true, 00:16:43.581 "num_base_bdevs": 2, 00:16:43.581 "num_base_bdevs_discovered": 2, 00:16:43.581 "num_base_bdevs_operational": 2, 00:16:43.581 "base_bdevs_list": [ 00:16:43.581 { 00:16:43.581 "name": "spare", 00:16:43.581 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:43.581 "is_configured": true, 00:16:43.581 "data_offset": 2048, 00:16:43.581 "data_size": 63488 00:16:43.581 }, 00:16:43.581 { 00:16:43.581 "name": "BaseBdev2", 00:16:43.581 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:43.581 "is_configured": true, 00:16:43.581 "data_offset": 2048, 00:16:43.581 "data_size": 63488 00:16:43.581 } 00:16:43.581 ] 00:16:43.581 }' 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.581 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.149 11:28:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.149 "name": "raid_bdev1", 00:16:44.149 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:44.149 "strip_size_kb": 0, 00:16:44.149 "state": "online", 00:16:44.149 "raid_level": "raid1", 00:16:44.149 "superblock": true, 00:16:44.149 "num_base_bdevs": 2, 00:16:44.149 "num_base_bdevs_discovered": 2, 00:16:44.149 "num_base_bdevs_operational": 2, 00:16:44.149 "base_bdevs_list": [ 00:16:44.149 { 00:16:44.149 "name": "spare", 00:16:44.149 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 }, 00:16:44.149 { 00:16:44.149 "name": "BaseBdev2", 00:16:44.149 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 } 00:16:44.149 ] 00:16:44.149 }' 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:44.149 11:28:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.408 [2024-11-20 11:28:52.037558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.408 "name": "raid_bdev1", 00:16:44.408 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:44.408 "strip_size_kb": 0, 00:16:44.408 "state": "online", 00:16:44.408 "raid_level": "raid1", 00:16:44.408 "superblock": true, 00:16:44.408 "num_base_bdevs": 2, 00:16:44.408 "num_base_bdevs_discovered": 1, 00:16:44.408 "num_base_bdevs_operational": 1, 00:16:44.408 "base_bdevs_list": [ 00:16:44.408 { 00:16:44.408 "name": null, 00:16:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.408 "is_configured": false, 00:16:44.408 "data_offset": 0, 00:16:44.408 "data_size": 63488 00:16:44.408 }, 00:16:44.408 { 00:16:44.408 "name": "BaseBdev2", 00:16:44.408 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:44.408 "is_configured": true, 00:16:44.408 "data_offset": 2048, 00:16:44.408 "data_size": 63488 00:16:44.408 } 00:16:44.408 ] 00:16:44.408 }' 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.408 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.975 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.975 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.975 11:28:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.975 [2024-11-20 11:28:52.561786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.976 [2024-11-20 11:28:52.562197] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.976 [2024-11-20 11:28:52.562231] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:44.976 [2024-11-20 11:28:52.562278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.976 [2024-11-20 11:28:52.577906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:44.976 11:28:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.976 11:28:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:44.976 [2024-11-20 11:28:52.580394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.912 11:28:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.912 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.912 "name": "raid_bdev1", 00:16:45.912 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:45.912 "strip_size_kb": 0, 00:16:45.912 "state": "online", 00:16:45.912 "raid_level": "raid1", 00:16:45.912 "superblock": true, 00:16:45.912 "num_base_bdevs": 2, 00:16:45.912 "num_base_bdevs_discovered": 2, 00:16:45.912 "num_base_bdevs_operational": 2, 00:16:45.912 "process": { 00:16:45.912 "type": "rebuild", 00:16:45.912 "target": "spare", 00:16:45.912 "progress": { 00:16:45.912 "blocks": 20480, 00:16:45.912 "percent": 32 00:16:45.912 } 00:16:45.912 }, 00:16:45.912 "base_bdevs_list": [ 00:16:45.912 { 00:16:45.912 "name": "spare", 00:16:45.912 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:45.912 "is_configured": true, 00:16:45.912 "data_offset": 2048, 00:16:45.912 "data_size": 63488 00:16:45.912 }, 00:16:45.912 { 00:16:45.912 "name": "BaseBdev2", 00:16:45.912 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:45.912 "is_configured": true, 00:16:45.912 "data_offset": 2048, 00:16:45.912 "data_size": 63488 00:16:45.912 } 00:16:45.912 ] 00:16:45.912 }' 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.913 11:28:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.913 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.171 [2024-11-20 11:28:53.758300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.171 [2024-11-20 11:28:53.789143] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.171 [2024-11-20 11:28:53.789221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.171 [2024-11-20 11:28:53.789245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.171 [2024-11-20 11:28:53.789267] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.171 "name": "raid_bdev1", 00:16:46.171 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:46.171 "strip_size_kb": 0, 00:16:46.171 "state": "online", 00:16:46.171 "raid_level": "raid1", 00:16:46.171 "superblock": true, 00:16:46.171 "num_base_bdevs": 2, 00:16:46.171 "num_base_bdevs_discovered": 1, 00:16:46.171 "num_base_bdevs_operational": 1, 00:16:46.171 "base_bdevs_list": [ 00:16:46.171 { 00:16:46.171 "name": null, 00:16:46.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.171 "is_configured": false, 00:16:46.171 "data_offset": 0, 00:16:46.171 "data_size": 63488 00:16:46.171 }, 00:16:46.171 { 00:16:46.171 "name": "BaseBdev2", 00:16:46.171 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:46.171 "is_configured": true, 00:16:46.171 "data_offset": 2048, 00:16:46.171 "data_size": 63488 00:16:46.171 } 00:16:46.171 ] 00:16:46.171 }' 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.171 11:28:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.737 11:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:46.737 11:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:46.737 11:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.737 [2024-11-20 11:28:54.354182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:46.737 [2024-11-20 11:28:54.354394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.737 [2024-11-20 11:28:54.354469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:46.737 [2024-11-20 11:28:54.354595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.737 [2024-11-20 11:28:54.355235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.737 [2024-11-20 11:28:54.355276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:46.737 [2024-11-20 11:28:54.355391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:46.737 [2024-11-20 11:28:54.355415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.737 [2024-11-20 11:28:54.355428] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:46.737 [2024-11-20 11:28:54.355464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.737 [2024-11-20 11:28:54.371316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:46.737 spare 00:16:46.737 11:28:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.737 11:28:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:46.737 [2024-11-20 11:28:54.373979] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.673 "name": "raid_bdev1", 00:16:47.673 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:47.673 "strip_size_kb": 0, 00:16:47.673 "state": "online", 00:16:47.673 
"raid_level": "raid1", 00:16:47.673 "superblock": true, 00:16:47.673 "num_base_bdevs": 2, 00:16:47.673 "num_base_bdevs_discovered": 2, 00:16:47.673 "num_base_bdevs_operational": 2, 00:16:47.673 "process": { 00:16:47.673 "type": "rebuild", 00:16:47.673 "target": "spare", 00:16:47.673 "progress": { 00:16:47.673 "blocks": 20480, 00:16:47.673 "percent": 32 00:16:47.673 } 00:16:47.673 }, 00:16:47.673 "base_bdevs_list": [ 00:16:47.673 { 00:16:47.673 "name": "spare", 00:16:47.673 "uuid": "6dc34fd7-1ef1-5979-bce3-5275f5396189", 00:16:47.673 "is_configured": true, 00:16:47.673 "data_offset": 2048, 00:16:47.673 "data_size": 63488 00:16:47.673 }, 00:16:47.673 { 00:16:47.673 "name": "BaseBdev2", 00:16:47.673 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:47.673 "is_configured": true, 00:16:47.673 "data_offset": 2048, 00:16:47.673 "data_size": 63488 00:16:47.673 } 00:16:47.673 ] 00:16:47.673 }' 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.673 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.931 [2024-11-20 11:28:55.535545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.931 [2024-11-20 11:28:55.583118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.931 [2024-11-20 11:28:55.583599] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.931 [2024-11-20 11:28:55.583940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.931 [2024-11-20 11:28:55.583992] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.931 11:28:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.931 "name": "raid_bdev1", 00:16:47.931 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:47.931 "strip_size_kb": 0, 00:16:47.931 "state": "online", 00:16:47.931 "raid_level": "raid1", 00:16:47.931 "superblock": true, 00:16:47.931 "num_base_bdevs": 2, 00:16:47.931 "num_base_bdevs_discovered": 1, 00:16:47.931 "num_base_bdevs_operational": 1, 00:16:47.931 "base_bdevs_list": [ 00:16:47.931 { 00:16:47.931 "name": null, 00:16:47.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.931 "is_configured": false, 00:16:47.931 "data_offset": 0, 00:16:47.931 "data_size": 63488 00:16:47.931 }, 00:16:47.931 { 00:16:47.931 "name": "BaseBdev2", 00:16:47.931 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:47.931 "is_configured": true, 00:16:47.931 "data_offset": 2048, 00:16:47.931 "data_size": 63488 00:16:47.931 } 00:16:47.931 ] 00:16:47.931 }' 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.931 11:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.497 "name": "raid_bdev1", 00:16:48.497 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:48.497 "strip_size_kb": 0, 00:16:48.497 "state": "online", 00:16:48.497 "raid_level": "raid1", 00:16:48.497 "superblock": true, 00:16:48.497 "num_base_bdevs": 2, 00:16:48.497 "num_base_bdevs_discovered": 1, 00:16:48.497 "num_base_bdevs_operational": 1, 00:16:48.497 "base_bdevs_list": [ 00:16:48.497 { 00:16:48.497 "name": null, 00:16:48.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.497 "is_configured": false, 00:16:48.497 "data_offset": 0, 00:16:48.497 "data_size": 63488 00:16:48.497 }, 00:16:48.497 { 00:16:48.497 "name": "BaseBdev2", 00:16:48.497 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:48.497 "is_configured": true, 00:16:48.497 "data_offset": 2048, 00:16:48.497 "data_size": 63488 00:16:48.497 } 00:16:48.497 ] 00:16:48.497 }' 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.497 11:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.497 [2024-11-20 11:28:56.318178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:48.497 [2024-11-20 11:28:56.318243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.497 [2024-11-20 11:28:56.318276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:48.497 [2024-11-20 11:28:56.318313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.497 [2024-11-20 11:28:56.318872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.497 [2024-11-20 11:28:56.318903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:48.497 [2024-11-20 11:28:56.319005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:48.497 [2024-11-20 11:28:56.319026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.498 [2024-11-20 11:28:56.319040] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.498 [2024-11-20 11:28:56.319052] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:48.498 BaseBdev1 00:16:48.498 11:28:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.498 11:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.874 "name": "raid_bdev1", 00:16:49.874 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:49.874 
"strip_size_kb": 0, 00:16:49.874 "state": "online", 00:16:49.874 "raid_level": "raid1", 00:16:49.874 "superblock": true, 00:16:49.874 "num_base_bdevs": 2, 00:16:49.874 "num_base_bdevs_discovered": 1, 00:16:49.874 "num_base_bdevs_operational": 1, 00:16:49.874 "base_bdevs_list": [ 00:16:49.874 { 00:16:49.874 "name": null, 00:16:49.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.874 "is_configured": false, 00:16:49.874 "data_offset": 0, 00:16:49.874 "data_size": 63488 00:16:49.874 }, 00:16:49.874 { 00:16:49.874 "name": "BaseBdev2", 00:16:49.874 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:49.874 "is_configured": true, 00:16:49.874 "data_offset": 2048, 00:16:49.874 "data_size": 63488 00:16:49.874 } 00:16:49.874 ] 00:16:49.874 }' 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.874 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.132 11:28:57 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.132 "name": "raid_bdev1", 00:16:50.132 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:50.132 "strip_size_kb": 0, 00:16:50.132 "state": "online", 00:16:50.132 "raid_level": "raid1", 00:16:50.132 "superblock": true, 00:16:50.132 "num_base_bdevs": 2, 00:16:50.132 "num_base_bdevs_discovered": 1, 00:16:50.132 "num_base_bdevs_operational": 1, 00:16:50.132 "base_bdevs_list": [ 00:16:50.132 { 00:16:50.132 "name": null, 00:16:50.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.132 "is_configured": false, 00:16:50.132 "data_offset": 0, 00:16:50.132 "data_size": 63488 00:16:50.132 }, 00:16:50.132 { 00:16:50.132 "name": "BaseBdev2", 00:16:50.132 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:50.132 "is_configured": true, 00:16:50.132 "data_offset": 2048, 00:16:50.132 "data_size": 63488 00:16:50.132 } 00:16:50.132 ] 00:16:50.132 }' 00:16:50.132 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.391 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.391 11:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.391 [2024-11-20 11:28:58.038868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.391 [2024-11-20 11:28:58.039070] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:50.391 [2024-11-20 11:28:58.039093] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:50.391 request: 00:16:50.391 { 00:16:50.391 "base_bdev": "BaseBdev1", 00:16:50.391 "raid_bdev": "raid_bdev1", 00:16:50.391 "method": "bdev_raid_add_base_bdev", 00:16:50.391 "req_id": 1 00:16:50.391 } 00:16:50.391 Got JSON-RPC error response 00:16:50.391 response: 00:16:50.391 { 00:16:50.391 "code": -22, 00:16:50.391 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:50.391 } 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:50.391 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:50.392 11:28:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:50.392 11:28:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:50.392 11:28:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.328 "name": "raid_bdev1", 00:16:51.328 "uuid": 
"3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:51.328 "strip_size_kb": 0, 00:16:51.328 "state": "online", 00:16:51.328 "raid_level": "raid1", 00:16:51.328 "superblock": true, 00:16:51.328 "num_base_bdevs": 2, 00:16:51.328 "num_base_bdevs_discovered": 1, 00:16:51.328 "num_base_bdevs_operational": 1, 00:16:51.328 "base_bdevs_list": [ 00:16:51.328 { 00:16:51.328 "name": null, 00:16:51.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.328 "is_configured": false, 00:16:51.328 "data_offset": 0, 00:16:51.328 "data_size": 63488 00:16:51.328 }, 00:16:51.328 { 00:16:51.328 "name": "BaseBdev2", 00:16:51.328 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:51.328 "is_configured": true, 00:16:51.328 "data_offset": 2048, 00:16:51.328 "data_size": 63488 00:16:51.328 } 00:16:51.328 ] 00:16:51.328 }' 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.328 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.895 "name": "raid_bdev1", 00:16:51.895 "uuid": "3d9f75b2-fbaa-4595-9750-c403f6879a72", 00:16:51.895 "strip_size_kb": 0, 00:16:51.895 "state": "online", 00:16:51.895 "raid_level": "raid1", 00:16:51.895 "superblock": true, 00:16:51.895 "num_base_bdevs": 2, 00:16:51.895 "num_base_bdevs_discovered": 1, 00:16:51.895 "num_base_bdevs_operational": 1, 00:16:51.895 "base_bdevs_list": [ 00:16:51.895 { 00:16:51.895 "name": null, 00:16:51.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.895 "is_configured": false, 00:16:51.895 "data_offset": 0, 00:16:51.895 "data_size": 63488 00:16:51.895 }, 00:16:51.895 { 00:16:51.895 "name": "BaseBdev2", 00:16:51.895 "uuid": "49259b48-78f8-5128-a3ee-4a49dad6c9bf", 00:16:51.895 "is_configured": true, 00:16:51.895 "data_offset": 2048, 00:16:51.895 "data_size": 63488 00:16:51.895 } 00:16:51.895 ] 00:16:51.895 }' 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.895 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75850 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75850 ']' 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75850 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75850 00:16:52.170 killing process with pid 75850 00:16:52.170 Received shutdown signal, test time was about 60.000000 seconds 00:16:52.170 00:16:52.170 Latency(us) 00:16:52.170 [2024-11-20T11:29:00.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.170 [2024-11-20T11:29:00.016Z] =================================================================================================================== 00:16:52.170 [2024-11-20T11:29:00.016Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75850' 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75850 00:16:52.170 [2024-11-20 11:28:59.829377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.170 11:28:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75850 00:16:52.170 [2024-11-20 11:28:59.829550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.170 [2024-11-20 11:28:59.829634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.170 [2024-11-20 11:28:59.829656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:52.429 [2024-11-20 11:29:00.108293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:16:53.390 00:16:53.390 real 0m27.071s 00:16:53.390 user 0m33.835s 00:16:53.390 sys 0m4.089s 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.390 ************************************ 00:16:53.390 END TEST raid_rebuild_test_sb 00:16:53.390 ************************************ 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.390 11:29:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:53.390 11:29:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:53.390 11:29:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.390 11:29:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.390 ************************************ 00:16:53.390 START TEST raid_rebuild_test_io 00:16:53.390 ************************************ 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.390 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76620 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76620 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76620 ']' 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.391 11:29:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.649 [2024-11-20 11:29:01.330453] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:16:53.649 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:53.649 Zero copy mechanism will not be used. 00:16:53.649 [2024-11-20 11:29:01.330649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76620 ] 00:16:53.908 [2024-11-20 11:29:01.519963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.908 [2024-11-20 11:29:01.681513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.166 [2024-11-20 11:29:01.902009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.166 [2024-11-20 11:29:01.902076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 BaseBdev1_malloc 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 [2024-11-20 11:29:02.329801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:54.741 [2024-11-20 11:29:02.329894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.741 [2024-11-20 11:29:02.329928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.741 [2024-11-20 11:29:02.329947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.741 [2024-11-20 11:29:02.332906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.741 [2024-11-20 11:29:02.332958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.741 BaseBdev1 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 BaseBdev2_malloc 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.741 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.741 [2024-11-20 11:29:02.382866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:54.741 [2024-11-20 11:29:02.382941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.741 [2024-11-20 11:29:02.382970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:54.741 [2024-11-20 11:29:02.382998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.741 [2024-11-20 11:29:02.385946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.741 [2024-11-20 11:29:02.386014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:54.741 BaseBdev2 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.742 spare_malloc 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.742 spare_delay 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.742 [2024-11-20 11:29:02.455821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.742 [2024-11-20 11:29:02.455909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.742 [2024-11-20 11:29:02.455938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:54.742 [2024-11-20 11:29:02.455956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.742 [2024-11-20 11:29:02.458835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.742 [2024-11-20 11:29:02.458898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.742 spare 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.742 
11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.742 [2024-11-20 11:29:02.463930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.742 [2024-11-20 11:29:02.466520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.742 [2024-11-20 11:29:02.466664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:54.742 [2024-11-20 11:29:02.466688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:54.742 [2024-11-20 11:29:02.466999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:54.742 [2024-11-20 11:29:02.467224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:54.742 [2024-11-20 11:29:02.467252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:54.742 [2024-11-20 11:29:02.467462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.742 "name": "raid_bdev1", 00:16:54.742 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:54.742 "strip_size_kb": 0, 00:16:54.742 "state": "online", 00:16:54.742 "raid_level": "raid1", 00:16:54.742 "superblock": false, 00:16:54.742 "num_base_bdevs": 2, 00:16:54.742 "num_base_bdevs_discovered": 2, 00:16:54.742 "num_base_bdevs_operational": 2, 00:16:54.742 "base_bdevs_list": [ 00:16:54.742 { 00:16:54.742 "name": "BaseBdev1", 00:16:54.742 "uuid": "edf8599d-5597-5c2f-b540-94669d859e6c", 00:16:54.742 "is_configured": true, 00:16:54.742 "data_offset": 0, 00:16:54.742 "data_size": 65536 00:16:54.742 }, 00:16:54.742 { 00:16:54.742 "name": "BaseBdev2", 00:16:54.742 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:54.742 "is_configured": true, 00:16:54.742 "data_offset": 0, 00:16:54.742 "data_size": 65536 00:16:54.742 } 00:16:54.742 ] 00:16:54.742 }' 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.742 11:29:02 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:55.311 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:55.311 11:29:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:55.311 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.311 11:29:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.311 [2024-11-20 11:29:02.984533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:55.311 [2024-11-20 11:29:03.092107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:55.311 "name": "raid_bdev1", 00:16:55.311 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:55.311 "strip_size_kb": 0, 00:16:55.311 "state": "online", 00:16:55.311 "raid_level": "raid1", 00:16:55.311 "superblock": false, 00:16:55.311 "num_base_bdevs": 2, 00:16:55.311 "num_base_bdevs_discovered": 1, 00:16:55.311 "num_base_bdevs_operational": 1, 00:16:55.311 "base_bdevs_list": [ 00:16:55.311 { 00:16:55.311 "name": null, 00:16:55.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.311 "is_configured": false, 00:16:55.311 "data_offset": 0, 00:16:55.311 "data_size": 65536 00:16:55.311 }, 00:16:55.311 { 00:16:55.311 "name": "BaseBdev2", 00:16:55.311 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:55.311 "is_configured": true, 00:16:55.311 "data_offset": 0, 00:16:55.311 "data_size": 65536 00:16:55.311 } 00:16:55.311 ] 00:16:55.311 }' 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.311 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.571 [2024-11-20 11:29:03.224401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:55.571 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:55.571 Zero copy mechanism will not be used. 00:16:55.571 Running I/O for 60 seconds... 
00:16:55.829 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.829 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.830 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.830 [2024-11-20 11:29:03.633110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.830 11:29:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.830 11:29:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:56.088 [2024-11-20 11:29:03.695684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:56.088 [2024-11-20 11:29:03.698205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.088 [2024-11-20 11:29:03.817344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:56.088 [2024-11-20 11:29:03.818055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:56.347 [2024-11-20 11:29:03.939192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:56.347 [2024-11-20 11:29:03.939514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:56.607 188.00 IOPS, 564.00 MiB/s [2024-11-20T11:29:04.453Z] [2024-11-20 11:29:04.315410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:56.607 [2024-11-20 11:29:04.448802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.868 11:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.126 "name": "raid_bdev1", 00:16:57.126 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:57.126 "strip_size_kb": 0, 00:16:57.126 "state": "online", 00:16:57.126 "raid_level": "raid1", 00:16:57.126 "superblock": false, 00:16:57.126 "num_base_bdevs": 2, 00:16:57.126 "num_base_bdevs_discovered": 2, 00:16:57.126 "num_base_bdevs_operational": 2, 00:16:57.126 "process": { 00:16:57.126 "type": "rebuild", 00:16:57.126 "target": "spare", 00:16:57.126 "progress": { 00:16:57.126 "blocks": 12288, 00:16:57.126 "percent": 18 00:16:57.126 } 00:16:57.126 }, 00:16:57.126 "base_bdevs_list": [ 00:16:57.126 { 00:16:57.126 "name": "spare", 00:16:57.126 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:16:57.126 "is_configured": true, 00:16:57.126 "data_offset": 0, 00:16:57.126 "data_size": 65536 00:16:57.126 }, 00:16:57.126 { 
00:16:57.126 "name": "BaseBdev2", 00:16:57.126 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:57.126 "is_configured": true, 00:16:57.126 "data_offset": 0, 00:16:57.126 "data_size": 65536 00:16:57.126 } 00:16:57.126 ] 00:16:57.126 }' 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.126 [2024-11-20 11:29:04.772665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:57.126 [2024-11-20 11:29:04.773369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.126 11:29:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.126 [2024-11-20 11:29:04.843353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.386 [2024-11-20 11:29:05.008535] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.386 [2024-11-20 11:29:05.018596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.386 [2024-11-20 11:29:05.018667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.386 [2024-11-20 11:29:05.018687] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.386 
[2024-11-20 11:29:05.061827] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.386 "name": 
"raid_bdev1", 00:16:57.386 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:57.386 "strip_size_kb": 0, 00:16:57.386 "state": "online", 00:16:57.386 "raid_level": "raid1", 00:16:57.386 "superblock": false, 00:16:57.386 "num_base_bdevs": 2, 00:16:57.386 "num_base_bdevs_discovered": 1, 00:16:57.386 "num_base_bdevs_operational": 1, 00:16:57.386 "base_bdevs_list": [ 00:16:57.386 { 00:16:57.386 "name": null, 00:16:57.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.386 "is_configured": false, 00:16:57.386 "data_offset": 0, 00:16:57.386 "data_size": 65536 00:16:57.386 }, 00:16:57.386 { 00:16:57.386 "name": "BaseBdev2", 00:16:57.386 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:57.386 "is_configured": true, 00:16:57.386 "data_offset": 0, 00:16:57.386 "data_size": 65536 00:16:57.386 } 00:16:57.386 ] 00:16:57.386 }' 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.386 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.903 133.00 IOPS, 399.00 MiB/s [2024-11-20T11:29:05.749Z] 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.903 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.903 "name": "raid_bdev1", 00:16:57.903 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:57.903 "strip_size_kb": 0, 00:16:57.903 "state": "online", 00:16:57.903 "raid_level": "raid1", 00:16:57.903 "superblock": false, 00:16:57.903 "num_base_bdevs": 2, 00:16:57.903 "num_base_bdevs_discovered": 1, 00:16:57.903 "num_base_bdevs_operational": 1, 00:16:57.903 "base_bdevs_list": [ 00:16:57.903 { 00:16:57.903 "name": null, 00:16:57.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.903 "is_configured": false, 00:16:57.903 "data_offset": 0, 00:16:57.903 "data_size": 65536 00:16:57.903 }, 00:16:57.903 { 00:16:57.904 "name": "BaseBdev2", 00:16:57.904 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:57.904 "is_configured": true, 00:16:57.904 "data_offset": 0, 00:16:57.904 "data_size": 65536 00:16:57.904 } 00:16:57.904 ] 00:16:57.904 }' 00:16:57.904 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.904 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.904 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.162 [2024-11-20 11:29:05.793359] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.162 11:29:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:58.162 [2024-11-20 11:29:05.889450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:58.162 [2024-11-20 11:29:05.892269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.162 [2024-11-20 11:29:06.001819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:58.162 [2024-11-20 11:29:06.002607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:58.421 [2024-11-20 11:29:06.207476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:58.421 [2024-11-20 11:29:06.207891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:58.679 150.00 IOPS, 450.00 MiB/s [2024-11-20T11:29:06.525Z] [2024-11-20 11:29:06.432347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:58.679 [2024-11-20 11:29:06.433002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:58.938 [2024-11-20 11:29:06.580795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.197 11:29:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.197 "name": "raid_bdev1", 00:16:59.197 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:59.197 "strip_size_kb": 0, 00:16:59.197 "state": "online", 00:16:59.197 "raid_level": "raid1", 00:16:59.197 "superblock": false, 00:16:59.197 "num_base_bdevs": 2, 00:16:59.197 "num_base_bdevs_discovered": 2, 00:16:59.197 "num_base_bdevs_operational": 2, 00:16:59.197 "process": { 00:16:59.197 "type": "rebuild", 00:16:59.197 "target": "spare", 00:16:59.197 "progress": { 00:16:59.197 "blocks": 12288, 00:16:59.197 "percent": 18 00:16:59.197 } 00:16:59.197 }, 00:16:59.197 "base_bdevs_list": [ 00:16:59.197 { 00:16:59.197 "name": "spare", 00:16:59.197 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:16:59.197 "is_configured": true, 00:16:59.197 "data_offset": 0, 00:16:59.197 "data_size": 65536 00:16:59.197 }, 00:16:59.197 { 00:16:59.197 "name": "BaseBdev2", 00:16:59.197 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:59.197 "is_configured": true, 00:16:59.197 "data_offset": 0, 00:16:59.197 "data_size": 65536 00:16:59.197 } 00:16:59.197 ] 
00:16:59.197 }' 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.197 [2024-11-20 11:29:06.916135] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.197 11:29:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=437 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.197 11:29:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.197 11:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.457 "name": "raid_bdev1", 00:16:59.457 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:16:59.457 "strip_size_kb": 0, 00:16:59.457 "state": "online", 00:16:59.457 "raid_level": "raid1", 00:16:59.457 "superblock": false, 00:16:59.457 "num_base_bdevs": 2, 00:16:59.457 "num_base_bdevs_discovered": 2, 00:16:59.457 "num_base_bdevs_operational": 2, 00:16:59.457 "process": { 00:16:59.457 "type": "rebuild", 00:16:59.457 "target": "spare", 00:16:59.457 "progress": { 00:16:59.457 "blocks": 14336, 00:16:59.457 "percent": 21 00:16:59.457 } 00:16:59.457 }, 00:16:59.457 "base_bdevs_list": [ 00:16:59.457 { 00:16:59.457 "name": "spare", 00:16:59.457 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:16:59.457 "is_configured": true, 00:16:59.457 "data_offset": 0, 00:16:59.457 "data_size": 65536 00:16:59.457 }, 00:16:59.457 { 00:16:59.457 "name": "BaseBdev2", 00:16:59.457 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:16:59.457 "is_configured": true, 00:16:59.457 "data_offset": 0, 00:16:59.457 "data_size": 65536 00:16:59.457 } 00:16:59.457 ] 00:16:59.457 }' 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.457 11:29:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.391 127.50 IOPS, 382.50 MiB/s [2024-11-20T11:29:08.237Z] 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.391 11:29:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.650 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.650 "name": "raid_bdev1", 00:17:00.650 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:17:00.650 "strip_size_kb": 0, 00:17:00.650 "state": "online", 00:17:00.650 "raid_level": "raid1", 00:17:00.650 "superblock": false, 00:17:00.650 "num_base_bdevs": 2, 00:17:00.650 "num_base_bdevs_discovered": 2, 00:17:00.650 "num_base_bdevs_operational": 2, 00:17:00.650 "process": { 00:17:00.650 "type": "rebuild", 00:17:00.650 "target": "spare", 00:17:00.650 "progress": { 
00:17:00.650 "blocks": 34816, 00:17:00.650 "percent": 53 00:17:00.650 } 00:17:00.650 }, 00:17:00.650 "base_bdevs_list": [ 00:17:00.650 { 00:17:00.650 "name": "spare", 00:17:00.650 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 0, 00:17:00.650 "data_size": 65536 00:17:00.650 }, 00:17:00.650 { 00:17:00.650 "name": "BaseBdev2", 00:17:00.650 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 0, 00:17:00.650 "data_size": 65536 00:17:00.650 } 00:17:00.650 ] 00:17:00.650 }' 00:17:00.650 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.650 117.60 IOPS, 352.80 MiB/s [2024-11-20T11:29:08.496Z] 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.650 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.650 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.650 11:29:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.650 [2024-11-20 11:29:08.371130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:00.650 [2024-11-20 11:29:08.371680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:00.973 [2024-11-20 11:29:08.574997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:00.973 [2024-11-20 11:29:08.575394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:01.285 [2024-11-20 11:29:08.891154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 
offset_end: 49152 00:17:01.285 [2024-11-20 11:29:09.019266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:01.546 104.50 IOPS, 313.50 MiB/s [2024-11-20T11:29:09.392Z] [2024-11-20 11:29:09.266643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.546 11:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.805 "name": "raid_bdev1", 00:17:01.805 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:17:01.805 "strip_size_kb": 0, 00:17:01.805 "state": "online", 00:17:01.805 "raid_level": "raid1", 00:17:01.805 "superblock": false, 00:17:01.805 "num_base_bdevs": 2, 00:17:01.805 
"num_base_bdevs_discovered": 2, 00:17:01.805 "num_base_bdevs_operational": 2, 00:17:01.805 "process": { 00:17:01.805 "type": "rebuild", 00:17:01.805 "target": "spare", 00:17:01.805 "progress": { 00:17:01.805 "blocks": 51200, 00:17:01.805 "percent": 78 00:17:01.805 } 00:17:01.805 }, 00:17:01.805 "base_bdevs_list": [ 00:17:01.805 { 00:17:01.805 "name": "spare", 00:17:01.805 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:17:01.805 "is_configured": true, 00:17:01.805 "data_offset": 0, 00:17:01.805 "data_size": 65536 00:17:01.805 }, 00:17:01.805 { 00:17:01.805 "name": "BaseBdev2", 00:17:01.805 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:17:01.805 "is_configured": true, 00:17:01.805 "data_offset": 0, 00:17:01.805 "data_size": 65536 00:17:01.805 } 00:17:01.805 ] 00:17:01.805 }' 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.805 [2024-11-20 11:29:09.486869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.805 11:29:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.373 [2024-11-20 11:29:10.172958] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:02.631 95.43 IOPS, 286.29 MiB/s [2024-11-20T11:29:10.477Z] [2024-11-20 11:29:10.273023] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:02.631 [2024-11-20 11:29:10.275861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.890 "name": "raid_bdev1", 00:17:02.890 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:17:02.890 "strip_size_kb": 0, 00:17:02.890 "state": "online", 00:17:02.890 "raid_level": "raid1", 00:17:02.890 "superblock": false, 00:17:02.890 "num_base_bdevs": 2, 00:17:02.890 "num_base_bdevs_discovered": 2, 00:17:02.890 "num_base_bdevs_operational": 2, 00:17:02.890 "base_bdevs_list": [ 00:17:02.890 { 00:17:02.890 "name": "spare", 00:17:02.890 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:17:02.890 "is_configured": true, 00:17:02.890 "data_offset": 0, 00:17:02.890 "data_size": 65536 00:17:02.890 }, 00:17:02.890 { 00:17:02.890 "name": "BaseBdev2", 00:17:02.890 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:17:02.890 
"is_configured": true, 00:17:02.890 "data_offset": 0, 00:17:02.890 "data_size": 65536 00:17:02.890 } 00:17:02.890 ] 00:17:02.890 }' 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:02.890 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.149 "name": "raid_bdev1", 00:17:03.149 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:17:03.149 
"strip_size_kb": 0, 00:17:03.149 "state": "online", 00:17:03.149 "raid_level": "raid1", 00:17:03.149 "superblock": false, 00:17:03.149 "num_base_bdevs": 2, 00:17:03.149 "num_base_bdevs_discovered": 2, 00:17:03.149 "num_base_bdevs_operational": 2, 00:17:03.149 "base_bdevs_list": [ 00:17:03.149 { 00:17:03.149 "name": "spare", 00:17:03.149 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:17:03.149 "is_configured": true, 00:17:03.149 "data_offset": 0, 00:17:03.149 "data_size": 65536 00:17:03.149 }, 00:17:03.149 { 00:17:03.149 "name": "BaseBdev2", 00:17:03.149 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:17:03.149 "is_configured": true, 00:17:03.149 "data_offset": 0, 00:17:03.149 "data_size": 65536 00:17:03.149 } 00:17:03.149 ] 00:17:03.149 }' 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.149 "name": "raid_bdev1", 00:17:03.149 "uuid": "9deb7d3a-0be5-480f-8abd-bef91ac73467", 00:17:03.149 "strip_size_kb": 0, 00:17:03.149 "state": "online", 00:17:03.149 "raid_level": "raid1", 00:17:03.149 "superblock": false, 00:17:03.149 "num_base_bdevs": 2, 00:17:03.149 "num_base_bdevs_discovered": 2, 00:17:03.149 "num_base_bdevs_operational": 2, 00:17:03.149 "base_bdevs_list": [ 00:17:03.149 { 00:17:03.149 "name": "spare", 00:17:03.149 "uuid": "5b3b9694-b9e5-5a8b-8711-fbb5746fa790", 00:17:03.149 "is_configured": true, 00:17:03.149 "data_offset": 0, 00:17:03.149 "data_size": 65536 00:17:03.149 }, 00:17:03.149 { 00:17:03.149 "name": "BaseBdev2", 00:17:03.149 "uuid": "d1062758-860c-5a5f-9ae7-9b08e55bdea8", 00:17:03.149 "is_configured": true, 00:17:03.149 "data_offset": 0, 00:17:03.149 "data_size": 65536 00:17:03.149 } 00:17:03.149 ] 00:17:03.149 }' 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.149 11:29:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.667 88.62 IOPS, 265.88 MiB/s [2024-11-20T11:29:11.513Z] 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.667 [2024-11-20 11:29:11.386868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.667 [2024-11-20 11:29:11.386905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.667 00:17:03.667 Latency(us) 00:17:03.667 [2024-11-20T11:29:11.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.667 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:03.667 raid_bdev1 : 8.25 87.50 262.50 0.00 0.00 15727.47 284.86 112006.98 00:17:03.667 [2024-11-20T11:29:11.513Z] =================================================================================================================== 00:17:03.667 [2024-11-20T11:29:11.513Z] Total : 87.50 262.50 0.00 0.00 15727.47 284.86 112006.98 00:17:03.667 [2024-11-20 11:29:11.499559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.667 [2024-11-20 11:29:11.499657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.667 [2024-11-20 11:29:11.499769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.667 [2024-11-20 11:29:11.499790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:03.667 { 00:17:03.667 "results": [ 00:17:03.667 { 00:17:03.667 "job": "raid_bdev1", 00:17:03.667 "core_mask": "0x1", 00:17:03.667 "workload": "randrw", 00:17:03.667 "percentage": 50, 00:17:03.667 
"status": "finished", 00:17:03.667 "queue_depth": 2, 00:17:03.667 "io_size": 3145728, 00:17:03.667 "runtime": 8.251548, 00:17:03.667 "iops": 87.49873357096148, 00:17:03.667 "mibps": 262.49620071288444, 00:17:03.667 "io_failed": 0, 00:17:03.667 "io_timeout": 0, 00:17:03.667 "avg_latency_us": 15727.46798287585, 00:17:03.667 "min_latency_us": 284.85818181818183, 00:17:03.667 "max_latency_us": 112006.98181818181 00:17:03.667 } 00:17:03.667 ], 00:17:03.667 "core_count": 1 00:17:03.667 } 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.667 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:03.926 11:29:11 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.926 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:04.185 /dev/nbd0 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.185 1+0 records in 00:17:04.185 1+0 records out 00:17:04.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455911 s, 9.0 MB/s 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:04.185 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.186 11:29:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:04.445 /dev/nbd1 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.445 1+0 records in 00:17:04.445 1+0 records out 00:17:04.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301728 s, 13.6 MB/s 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:04.445 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.705 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.964 11:29:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.224 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76620 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76620 ']' 00:17:05.484 11:29:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76620 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76620 00:17:05.484 killing process with pid 76620 00:17:05.484 Received shutdown signal, test time was about 9.881325 seconds 00:17:05.484 00:17:05.484 Latency(us) 00:17:05.484 [2024-11-20T11:29:13.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.484 [2024-11-20T11:29:13.330Z] =================================================================================================================== 00:17:05.484 [2024-11-20T11:29:13.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76620' 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76620 00:17:05.484 [2024-11-20 11:29:13.108583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.484 11:29:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76620 00:17:05.484 [2024-11-20 11:29:13.324250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.862 ************************************ 00:17:06.862 END TEST raid_rebuild_test_io 00:17:06.862 ************************************ 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:06.862 00:17:06.862 real 0m13.231s 00:17:06.862 user 0m17.411s 
00:17:06.862 sys 0m1.439s 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.862 11:29:14 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:06.862 11:29:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:06.862 11:29:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.862 11:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.862 ************************************ 00:17:06.862 START TEST raid_rebuild_test_sb_io 00:17:06.862 ************************************ 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.862 11:29:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:06.862 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77003 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77003 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77003 ']' 
00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.863 11:29:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.863 Zero copy mechanism will not be used. 00:17:06.863 [2024-11-20 11:29:14.620082] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:17:06.863 [2024-11-20 11:29:14.620261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77003 ] 00:17:07.122 [2024-11-20 11:29:14.810149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.122 [2024-11-20 11:29:14.963340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.381 [2024-11-20 11:29:15.183241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.381 [2024-11-20 11:29:15.183529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:07.948 11:29:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.948 BaseBdev1_malloc 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.948 [2024-11-20 11:29:15.676138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:07.948 [2024-11-20 11:29:15.676225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.948 [2024-11-20 11:29:15.676274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:07.948 [2024-11-20 11:29:15.676292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.948 [2024-11-20 11:29:15.679197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.948 [2024-11-20 11:29:15.679381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.948 BaseBdev1 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.948 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.948 BaseBdev2_malloc 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.949 [2024-11-20 11:29:15.734270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:07.949 [2024-11-20 11:29:15.734348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.949 [2024-11-20 11:29:15.734377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:07.949 [2024-11-20 11:29:15.734397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.949 [2024-11-20 11:29:15.737185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.949 [2024-11-20 11:29:15.737233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:07.949 BaseBdev2 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:07.949 spare_malloc 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.949 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.207 spare_delay 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.207 [2024-11-20 11:29:15.805666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.207 [2024-11-20 11:29:15.805752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.207 [2024-11-20 11:29:15.805782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:08.207 [2024-11-20 11:29:15.805799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.207 [2024-11-20 11:29:15.808830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.207 [2024-11-20 11:29:15.808881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.207 spare 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.207 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.207 [2024-11-20 11:29:15.813841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.207 [2024-11-20 11:29:15.816373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.207 [2024-11-20 11:29:15.816609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:08.207 [2024-11-20 11:29:15.816658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:08.207 [2024-11-20 11:29:15.816970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:08.207 [2024-11-20 11:29:15.817199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:08.207 [2024-11-20 11:29:15.817215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:08.208 [2024-11-20 11:29:15.817400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.208 11:29:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.208 "name": "raid_bdev1", 00:17:08.208 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:08.208 "strip_size_kb": 0, 00:17:08.208 "state": "online", 00:17:08.208 "raid_level": "raid1", 00:17:08.208 "superblock": true, 00:17:08.208 "num_base_bdevs": 2, 00:17:08.208 "num_base_bdevs_discovered": 2, 00:17:08.208 "num_base_bdevs_operational": 2, 00:17:08.208 "base_bdevs_list": [ 00:17:08.208 { 00:17:08.208 "name": "BaseBdev1", 00:17:08.208 "uuid": "23d40704-df96-5993-9053-f6e11d551774", 00:17:08.208 "is_configured": true, 00:17:08.208 "data_offset": 2048, 00:17:08.208 "data_size": 63488 00:17:08.208 }, 00:17:08.208 { 00:17:08.208 "name": "BaseBdev2", 00:17:08.208 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:08.208 "is_configured": true, 00:17:08.208 "data_offset": 2048, 
00:17:08.208 "data_size": 63488 00:17:08.208 } 00:17:08.208 ] 00:17:08.208 }' 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.208 11:29:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.466 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:08.466 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.466 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.466 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.466 [2024-11-20 11:29:16.294353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.466 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:08.725 11:29:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.725 [2024-11-20 11:29:16.397977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.725 11:29:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.725 "name": "raid_bdev1", 00:17:08.725 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:08.725 "strip_size_kb": 0, 00:17:08.725 "state": "online", 00:17:08.725 "raid_level": "raid1", 00:17:08.725 "superblock": true, 00:17:08.725 "num_base_bdevs": 2, 00:17:08.725 "num_base_bdevs_discovered": 1, 00:17:08.725 "num_base_bdevs_operational": 1, 00:17:08.725 "base_bdevs_list": [ 00:17:08.725 { 00:17:08.725 "name": null, 00:17:08.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.725 "is_configured": false, 00:17:08.725 "data_offset": 0, 00:17:08.725 "data_size": 63488 00:17:08.725 }, 00:17:08.725 { 00:17:08.725 "name": "BaseBdev2", 00:17:08.725 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:08.725 "is_configured": true, 00:17:08.725 "data_offset": 2048, 00:17:08.725 "data_size": 63488 00:17:08.725 } 00:17:08.725 ] 00:17:08.725 }' 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.725 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.984 [2024-11-20 11:29:16.578385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:08.984 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:08.984 Zero copy mechanism will not be used. 00:17:08.984 Running I/O for 60 seconds... 
00:17:09.242 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.242 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.242 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.242 [2024-11-20 11:29:16.925726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.242 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.242 11:29:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.242 [2024-11-20 11:29:16.989609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:09.242 [2024-11-20 11:29:16.992621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.500 [2024-11-20 11:29:17.093958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:09.500 [2024-11-20 11:29:17.094670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:09.500 [2024-11-20 11:29:17.231976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:09.500 [2024-11-20 11:29:17.232485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:10.067 140.00 IOPS, 420.00 MiB/s [2024-11-20T11:29:17.913Z] [2024-11-20 11:29:17.723387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:10.067 [2024-11-20 11:29:17.724119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:10.325 [2024-11-20 11:29:17.967309] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.325 11:29:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.325 "name": "raid_bdev1", 00:17:10.325 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:10.325 "strip_size_kb": 0, 00:17:10.325 "state": "online", 00:17:10.325 "raid_level": "raid1", 00:17:10.325 "superblock": true, 00:17:10.325 "num_base_bdevs": 2, 00:17:10.325 "num_base_bdevs_discovered": 2, 00:17:10.325 "num_base_bdevs_operational": 2, 00:17:10.325 "process": { 00:17:10.325 "type": "rebuild", 00:17:10.325 "target": "spare", 00:17:10.325 "progress": { 00:17:10.325 "blocks": 14336, 00:17:10.325 "percent": 22 00:17:10.325 } 00:17:10.325 }, 00:17:10.325 "base_bdevs_list": [ 00:17:10.325 { 00:17:10.325 "name": "spare", 
00:17:10.325 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:10.325 "is_configured": true, 00:17:10.325 "data_offset": 2048, 00:17:10.325 "data_size": 63488 00:17:10.325 }, 00:17:10.325 { 00:17:10.325 "name": "BaseBdev2", 00:17:10.325 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:10.325 "is_configured": true, 00:17:10.325 "data_offset": 2048, 00:17:10.325 "data_size": 63488 00:17:10.325 } 00:17:10.325 ] 00:17:10.325 }' 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.325 [2024-11-20 11:29:18.078176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:10.325 [2024-11-20 11:29:18.078729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.325 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.325 [2024-11-20 11:29:18.139697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.583 [2024-11-20 11:29:18.315681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.583 [2024-11-20 11:29:18.334250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.583 [2024-11-20 11:29:18.334303] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.583 [2024-11-20 11:29:18.334320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.583 [2024-11-20 11:29:18.388501] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.583 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.583 11:29:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.841 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.841 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.841 "name": "raid_bdev1", 00:17:10.841 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:10.841 "strip_size_kb": 0, 00:17:10.841 "state": "online", 00:17:10.841 "raid_level": "raid1", 00:17:10.841 "superblock": true, 00:17:10.841 "num_base_bdevs": 2, 00:17:10.841 "num_base_bdevs_discovered": 1, 00:17:10.841 "num_base_bdevs_operational": 1, 00:17:10.841 "base_bdevs_list": [ 00:17:10.841 { 00:17:10.841 "name": null, 00:17:10.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.841 "is_configured": false, 00:17:10.841 "data_offset": 0, 00:17:10.841 "data_size": 63488 00:17:10.841 }, 00:17:10.841 { 00:17:10.841 "name": "BaseBdev2", 00:17:10.841 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:10.841 "is_configured": true, 00:17:10.841 "data_offset": 2048, 00:17:10.841 "data_size": 63488 00:17:10.841 } 00:17:10.841 ] 00:17:10.841 }' 00:17:10.841 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.841 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.100 120.00 IOPS, 360.00 MiB/s [2024-11-20T11:29:18.946Z] 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.100 
11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.100 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.358 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.358 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.358 "name": "raid_bdev1", 00:17:11.358 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:11.358 "strip_size_kb": 0, 00:17:11.358 "state": "online", 00:17:11.358 "raid_level": "raid1", 00:17:11.358 "superblock": true, 00:17:11.358 "num_base_bdevs": 2, 00:17:11.358 "num_base_bdevs_discovered": 1, 00:17:11.358 "num_base_bdevs_operational": 1, 00:17:11.358 "base_bdevs_list": [ 00:17:11.358 { 00:17:11.358 "name": null, 00:17:11.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.359 "is_configured": false, 00:17:11.359 "data_offset": 0, 00:17:11.359 "data_size": 63488 00:17:11.359 }, 00:17:11.359 { 00:17:11.359 "name": "BaseBdev2", 00:17:11.359 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:11.359 "is_configured": true, 00:17:11.359 "data_offset": 2048, 00:17:11.359 "data_size": 63488 00:17:11.359 } 00:17:11.359 ] 00:17:11.359 }' 00:17:11.359 11:29:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.359 11:29:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.359 [2024-11-20 11:29:19.095738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.359 11:29:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.359 [2024-11-20 11:29:19.168788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:11.359 [2024-11-20 11:29:19.171594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:11.617 [2024-11-20 11:29:19.292015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:11.617 [2024-11-20 11:29:19.292649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:11.875 [2024-11-20 11:29:19.513892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:11.875 [2024-11-20 11:29:19.514499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:12.133 139.67 IOPS, 419.00 MiB/s [2024-11-20T11:29:19.979Z] [2024-11-20 11:29:19.873153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:12.392 [2024-11-20 11:29:20.120787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:12.392 [2024-11-20 11:29:20.121334] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.392 "name": "raid_bdev1", 00:17:12.392 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:12.392 "strip_size_kb": 0, 00:17:12.392 "state": "online", 00:17:12.392 "raid_level": "raid1", 00:17:12.392 "superblock": true, 00:17:12.392 "num_base_bdevs": 2, 00:17:12.392 "num_base_bdevs_discovered": 2, 00:17:12.392 "num_base_bdevs_operational": 2, 00:17:12.392 "process": { 00:17:12.392 "type": "rebuild", 00:17:12.392 "target": "spare", 00:17:12.392 "progress": { 00:17:12.392 "blocks": 10240, 00:17:12.392 "percent": 16 00:17:12.392 } 00:17:12.392 }, 00:17:12.392 "base_bdevs_list": [ 00:17:12.392 { 00:17:12.392 "name": "spare", 
00:17:12.392 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:12.392 "is_configured": true, 00:17:12.392 "data_offset": 2048, 00:17:12.392 "data_size": 63488 00:17:12.392 }, 00:17:12.392 { 00:17:12.392 "name": "BaseBdev2", 00:17:12.392 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:12.392 "is_configured": true, 00:17:12.392 "data_offset": 2048, 00:17:12.392 "data_size": 63488 00:17:12.392 } 00:17:12.392 ] 00:17:12.392 }' 00:17:12.392 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:12.651 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.651 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.651 "name": "raid_bdev1", 00:17:12.651 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:12.651 "strip_size_kb": 0, 00:17:12.651 "state": "online", 00:17:12.651 "raid_level": "raid1", 00:17:12.651 "superblock": true, 00:17:12.651 "num_base_bdevs": 2, 00:17:12.651 "num_base_bdevs_discovered": 2, 00:17:12.651 "num_base_bdevs_operational": 2, 00:17:12.651 "process": { 00:17:12.651 "type": "rebuild", 00:17:12.651 "target": "spare", 00:17:12.651 "progress": { 00:17:12.651 "blocks": 10240, 00:17:12.651 "percent": 16 00:17:12.651 } 00:17:12.651 }, 00:17:12.651 "base_bdevs_list": [ 00:17:12.651 { 00:17:12.651 "name": "spare", 00:17:12.652 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:12.652 "is_configured": true, 00:17:12.652 "data_offset": 2048, 00:17:12.652 "data_size": 63488 00:17:12.652 }, 00:17:12.652 { 00:17:12.652 "name": "BaseBdev2", 00:17:12.652 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:12.652 "is_configured": true, 00:17:12.652 
"data_offset": 2048, 00:17:12.652 "data_size": 63488 00:17:12.652 } 00:17:12.652 ] 00:17:12.652 }' 00:17:12.652 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.652 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.652 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.652 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.652 11:29:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.911 [2024-11-20 11:29:20.554232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:12.911 [2024-11-20 11:29:20.554589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:13.169 121.00 IOPS, 363.00 MiB/s [2024-11-20T11:29:21.015Z] [2024-11-20 11:29:20.904000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:13.737 [2024-11-20 11:29:21.364394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.737 11:29:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.737 "name": "raid_bdev1", 00:17:13.737 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:13.737 "strip_size_kb": 0, 00:17:13.737 "state": "online", 00:17:13.737 "raid_level": "raid1", 00:17:13.737 "superblock": true, 00:17:13.737 "num_base_bdevs": 2, 00:17:13.737 "num_base_bdevs_discovered": 2, 00:17:13.737 "num_base_bdevs_operational": 2, 00:17:13.737 "process": { 00:17:13.737 "type": "rebuild", 00:17:13.737 "target": "spare", 00:17:13.737 "progress": { 00:17:13.737 "blocks": 28672, 00:17:13.737 "percent": 45 00:17:13.737 } 00:17:13.737 }, 00:17:13.737 "base_bdevs_list": [ 00:17:13.737 { 00:17:13.737 "name": "spare", 00:17:13.737 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:13.737 "is_configured": true, 00:17:13.737 "data_offset": 2048, 00:17:13.737 "data_size": 63488 00:17:13.737 }, 00:17:13.737 { 00:17:13.737 "name": "BaseBdev2", 00:17:13.737 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:13.737 "is_configured": true, 00:17:13.737 "data_offset": 2048, 00:17:13.737 "data_size": 63488 00:17:13.737 } 00:17:13.737 ] 00:17:13.737 }' 00:17:13.737 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.997 109.60 IOPS, 328.80 MiB/s 
[2024-11-20T11:29:21.843Z] 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.997 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.997 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.997 11:29:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.257 [2024-11-20 11:29:22.063396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:14.825 99.00 IOPS, 297.00 MiB/s [2024-11-20T11:29:22.671Z] 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.825 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.085 [2024-11-20 11:29:22.689396] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:15.085 [2024-11-20 11:29:22.690322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.085 "name": "raid_bdev1", 00:17:15.085 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:15.085 "strip_size_kb": 0, 00:17:15.085 "state": "online", 00:17:15.085 "raid_level": "raid1", 00:17:15.085 "superblock": true, 00:17:15.085 "num_base_bdevs": 2, 00:17:15.085 "num_base_bdevs_discovered": 2, 00:17:15.085 "num_base_bdevs_operational": 2, 00:17:15.085 "process": { 00:17:15.085 "type": "rebuild", 00:17:15.085 "target": "spare", 00:17:15.085 "progress": { 00:17:15.085 "blocks": 49152, 00:17:15.085 "percent": 77 00:17:15.085 } 00:17:15.085 }, 00:17:15.085 "base_bdevs_list": [ 00:17:15.085 { 00:17:15.085 "name": "spare", 00:17:15.085 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:15.085 "is_configured": true, 00:17:15.085 "data_offset": 2048, 00:17:15.085 "data_size": 63488 00:17:15.085 }, 00:17:15.085 { 00:17:15.085 "name": "BaseBdev2", 00:17:15.085 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:15.085 "is_configured": true, 00:17:15.085 "data_offset": 2048, 00:17:15.085 "data_size": 63488 00:17:15.085 } 00:17:15.085 ] 00:17:15.085 }' 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.085 11:29:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.085 [2024-11-20 11:29:22.919051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:15.085 [2024-11-20 11:29:22.919551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:16.021 [2024-11-20 11:29:23.577496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:16.021 90.14 IOPS, 270.43 MiB/s [2024-11-20T11:29:23.867Z] [2024-11-20 11:29:23.669174] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:16.021 [2024-11-20 11:29:23.671660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.021 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.281 "name": "raid_bdev1", 00:17:16.281 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:16.281 "strip_size_kb": 0, 00:17:16.281 "state": "online", 00:17:16.281 "raid_level": "raid1", 00:17:16.281 "superblock": true, 00:17:16.281 "num_base_bdevs": 2, 00:17:16.281 "num_base_bdevs_discovered": 2, 00:17:16.281 "num_base_bdevs_operational": 2, 00:17:16.281 "base_bdevs_list": [ 00:17:16.281 { 00:17:16.281 "name": "spare", 00:17:16.281 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:16.281 "is_configured": true, 00:17:16.281 "data_offset": 2048, 00:17:16.281 "data_size": 63488 00:17:16.281 }, 00:17:16.281 { 00:17:16.281 "name": "BaseBdev2", 00:17:16.281 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:16.281 "is_configured": true, 00:17:16.281 "data_offset": 2048, 00:17:16.281 "data_size": 63488 00:17:16.281 } 00:17:16.281 ] 00:17:16.281 }' 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:16.281 11:29:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.281 "name": "raid_bdev1", 00:17:16.281 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:16.281 "strip_size_kb": 0, 00:17:16.281 "state": "online", 00:17:16.281 "raid_level": "raid1", 00:17:16.281 "superblock": true, 00:17:16.281 "num_base_bdevs": 2, 00:17:16.281 "num_base_bdevs_discovered": 2, 00:17:16.281 "num_base_bdevs_operational": 2, 00:17:16.281 "base_bdevs_list": [ 00:17:16.281 { 00:17:16.281 "name": "spare", 00:17:16.281 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:16.281 "is_configured": true, 00:17:16.281 "data_offset": 2048, 00:17:16.281 "data_size": 63488 00:17:16.281 }, 00:17:16.281 { 00:17:16.281 "name": "BaseBdev2", 00:17:16.281 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:16.281 "is_configured": true, 00:17:16.281 "data_offset": 2048, 00:17:16.281 "data_size": 63488 00:17:16.281 } 00:17:16.281 ] 00:17:16.281 }' 00:17:16.281 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.540 "name": 
"raid_bdev1", 00:17:16.540 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:16.540 "strip_size_kb": 0, 00:17:16.540 "state": "online", 00:17:16.540 "raid_level": "raid1", 00:17:16.540 "superblock": true, 00:17:16.540 "num_base_bdevs": 2, 00:17:16.540 "num_base_bdevs_discovered": 2, 00:17:16.540 "num_base_bdevs_operational": 2, 00:17:16.540 "base_bdevs_list": [ 00:17:16.540 { 00:17:16.540 "name": "spare", 00:17:16.540 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:16.540 "is_configured": true, 00:17:16.540 "data_offset": 2048, 00:17:16.540 "data_size": 63488 00:17:16.540 }, 00:17:16.540 { 00:17:16.540 "name": "BaseBdev2", 00:17:16.540 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:16.540 "is_configured": true, 00:17:16.540 "data_offset": 2048, 00:17:16.540 "data_size": 63488 00:17:16.540 } 00:17:16.540 ] 00:17:16.540 }' 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.540 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.058 82.75 IOPS, 248.25 MiB/s [2024-11-20T11:29:24.904Z] 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.058 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.058 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.058 [2024-11-20 11:29:24.697798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.058 [2024-11-20 11:29:24.697842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.058 00:17:17.058 Latency(us) 00:17:17.058 [2024-11-20T11:29:24.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.058 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:17.058 raid_bdev1 : 8.19 81.33 244.00 0.00 0.00 
16965.73 284.86 129642.12 00:17:17.058 [2024-11-20T11:29:24.904Z] =================================================================================================================== 00:17:17.058 [2024-11-20T11:29:24.904Z] Total : 81.33 244.00 0.00 0.00 16965.73 284.86 129642.12 00:17:17.058 [2024-11-20 11:29:24.790306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.058 [2024-11-20 11:29:24.790385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.058 [2024-11-20 11:29:24.790524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.058 [2024-11-20 11:29:24.790545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.058 { 00:17:17.058 "results": [ 00:17:17.058 { 00:17:17.059 "job": "raid_bdev1", 00:17:17.059 "core_mask": "0x1", 00:17:17.059 "workload": "randrw", 00:17:17.059 "percentage": 50, 00:17:17.059 "status": "finished", 00:17:17.059 "queue_depth": 2, 00:17:17.059 "io_size": 3145728, 00:17:17.059 "runtime": 8.188449, 00:17:17.059 "iops": 81.33408414707108, 00:17:17.059 "mibps": 244.00225244121324, 00:17:17.059 "io_failed": 0, 00:17:17.059 "io_timeout": 0, 00:17:17.059 "avg_latency_us": 16965.73291837292, 00:17:17.059 "min_latency_us": 284.85818181818183, 00:17:17.059 "max_latency_us": 129642.12363636364 00:17:17.059 } 00:17:17.059 ], 00:17:17.059 "core_count": 1 00:17:17.059 } 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # jq length 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.059 11:29:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:17.318 /dev/nbd0 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:17.577 11:29:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.577 1+0 records in 00:17:17.577 1+0 records out 00:17:17.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045184 s, 9.1 MB/s 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:17.577 
11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:17.577 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:17.578 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:17.578 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.578 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:17.836 /dev/nbd1 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.836 1+0 records in 00:17:17.836 1+0 records out 00:17:17.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683284 s, 6.0 MB/s 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.836 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.095 11:29:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:18.354 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:18.354 11:29:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.612 [2024-11-20 11:29:26.307013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.612 
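The nbd_stop_disks sequence traced here is the mirror image of the startup check: ask the RPC server to stop each device, then poll until its name drops out of /proc/partitions. A sketch under the assumption that an rpc.py-style client is on hand (`RPC_PY` and both function names are stand-ins, not the SPDK helpers):

```shell
#!/usr/bin/env bash
# Sketch of the teardown traced above: stop each nbd device over RPC,
# then wait for the kernel to deregister it. RPC_PY is a placeholder.
RPC_PY=${RPC_PY:-"scripts/rpc.py -s /var/tmp/spdk.sock"}

wait_for_dev_gone() {
    local name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # grep succeeding means the device is still registered
        grep -q -w "$name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1  # device never went away
}

stop_nbd_disks() {
    local dev
    for dev in "$@"; do
        $RPC_PY nbd_stop_disk "$dev"
        wait_for_dev_gone "$(basename "$dev")"
    done
}
```

Waiting for the /proc/partitions entry to disappear matters because nbd_stop_disk returns before the kernel has fully torn the device down; skipping the wait can race the next test step.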
[2024-11-20 11:29:26.307090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.612 [2024-11-20 11:29:26.307120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:18.612 [2024-11-20 11:29:26.307145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.612 [2024-11-20 11:29:26.310337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.612 [2024-11-20 11:29:26.310388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.612 [2024-11-20 11:29:26.310505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.612 [2024-11-20 11:29:26.310597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.612 [2024-11-20 11:29:26.310818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.612 spare 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.612 [2024-11-20 11:29:26.410953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:18.612 [2024-11-20 11:29:26.410978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:18.612 [2024-11-20 11:29:26.411259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:18.612 [2024-11-20 11:29:26.411452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:18.612 [2024-11-20 11:29:26.411474] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:18.612 [2024-11-20 11:29:26.411713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.612 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.613 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.613 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.613 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.613 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.613 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.872 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.872 "name": "raid_bdev1", 00:17:18.872 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:18.872 "strip_size_kb": 0, 00:17:18.872 "state": "online", 00:17:18.872 "raid_level": "raid1", 00:17:18.872 "superblock": true, 00:17:18.872 "num_base_bdevs": 2, 00:17:18.872 "num_base_bdevs_discovered": 2, 00:17:18.872 "num_base_bdevs_operational": 2, 00:17:18.872 "base_bdevs_list": [ 00:17:18.872 { 00:17:18.872 "name": "spare", 00:17:18.872 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:18.872 "is_configured": true, 00:17:18.872 "data_offset": 2048, 00:17:18.872 "data_size": 63488 00:17:18.872 }, 00:17:18.872 { 00:17:18.872 "name": "BaseBdev2", 00:17:18.872 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:18.872 "is_configured": true, 00:17:18.872 "data_offset": 2048, 00:17:18.872 "data_size": 63488 00:17:18.872 } 00:17:18.872 ] 00:17:18.872 }' 00:17:18.872 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.872 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.131 "name": "raid_bdev1", 00:17:19.131 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:19.131 "strip_size_kb": 0, 00:17:19.131 "state": "online", 00:17:19.131 "raid_level": "raid1", 00:17:19.131 "superblock": true, 00:17:19.131 "num_base_bdevs": 2, 00:17:19.131 "num_base_bdevs_discovered": 2, 00:17:19.131 "num_base_bdevs_operational": 2, 00:17:19.131 "base_bdevs_list": [ 00:17:19.131 { 00:17:19.131 "name": "spare", 00:17:19.131 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:19.131 "is_configured": true, 00:17:19.131 "data_offset": 2048, 00:17:19.131 "data_size": 63488 00:17:19.131 }, 00:17:19.131 { 00:17:19.131 "name": "BaseBdev2", 00:17:19.131 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:19.131 "is_configured": true, 00:17:19.131 "data_offset": 2048, 00:17:19.131 "data_size": 63488 00:17:19.131 } 00:17:19.131 ] 00:17:19.131 }' 00:17:19.131 11:29:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.391 [2024-11-20 11:29:27.100474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.391 11:29:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.391 "name": "raid_bdev1", 00:17:19.391 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:19.391 "strip_size_kb": 0, 00:17:19.391 "state": "online", 00:17:19.391 "raid_level": "raid1", 00:17:19.391 "superblock": true, 00:17:19.391 "num_base_bdevs": 2, 00:17:19.391 "num_base_bdevs_discovered": 1, 00:17:19.391 "num_base_bdevs_operational": 1, 00:17:19.391 "base_bdevs_list": [ 00:17:19.391 { 00:17:19.391 "name": null, 00:17:19.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.391 "is_configured": false, 00:17:19.391 "data_offset": 0, 00:17:19.391 "data_size": 63488 00:17:19.391 }, 00:17:19.391 { 00:17:19.391 "name": "BaseBdev2", 00:17:19.391 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:19.391 "is_configured": true, 00:17:19.391 "data_offset": 2048, 00:17:19.391 "data_size": 63488 00:17:19.391 } 00:17:19.391 ] 00:17:19.391 }' 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.391 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.959 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.959 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.959 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.959 [2024-11-20 11:29:27.564742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.959 [2024-11-20 11:29:27.565014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.959 [2024-11-20 11:29:27.565062] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:19.959 [2024-11-20 11:29:27.565115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.959 [2024-11-20 11:29:27.580607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:17:19.959 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.959 11:29:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:19.959 [2024-11-20 11:29:27.583165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:20.896 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.897 "name": "raid_bdev1", 00:17:20.897 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:20.897 "strip_size_kb": 0, 00:17:20.897 "state": "online", 00:17:20.897 "raid_level": "raid1", 00:17:20.897 "superblock": true, 00:17:20.897 "num_base_bdevs": 2, 00:17:20.897 "num_base_bdevs_discovered": 2, 00:17:20.897 "num_base_bdevs_operational": 2, 00:17:20.897 "process": { 00:17:20.897 "type": "rebuild", 00:17:20.897 "target": "spare", 00:17:20.897 "progress": { 00:17:20.897 "blocks": 20480, 00:17:20.897 "percent": 32 00:17:20.897 } 00:17:20.897 }, 00:17:20.897 "base_bdevs_list": [ 00:17:20.897 { 00:17:20.897 "name": "spare", 00:17:20.897 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:20.897 "is_configured": true, 00:17:20.897 "data_offset": 2048, 00:17:20.897 "data_size": 63488 00:17:20.897 }, 00:17:20.897 { 00:17:20.897 "name": "BaseBdev2", 00:17:20.897 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:20.897 "is_configured": true, 00:17:20.897 "data_offset": 2048, 00:17:20.897 "data_size": 63488 00:17:20.897 } 00:17:20.897 ] 00:17:20.897 }' 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.897 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.158 [2024-11-20 11:29:28.752929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.158 [2024-11-20 11:29:28.792094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.158 [2024-11-20 11:29:28.792173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.158 [2024-11-20 11:29:28.792199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.158 [2024-11-20 11:29:28.792209] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.158 "name": "raid_bdev1", 00:17:21.158 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:21.158 "strip_size_kb": 0, 00:17:21.158 "state": "online", 00:17:21.158 "raid_level": "raid1", 00:17:21.158 "superblock": true, 00:17:21.158 "num_base_bdevs": 2, 00:17:21.158 "num_base_bdevs_discovered": 1, 00:17:21.158 "num_base_bdevs_operational": 1, 00:17:21.158 "base_bdevs_list": [ 00:17:21.158 { 00:17:21.158 "name": null, 00:17:21.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.158 "is_configured": false, 00:17:21.158 "data_offset": 0, 00:17:21.158 "data_size": 63488 00:17:21.158 }, 00:17:21.158 { 00:17:21.158 "name": "BaseBdev2", 00:17:21.158 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:21.158 "is_configured": true, 00:17:21.158 "data_offset": 2048, 00:17:21.158 "data_size": 63488 00:17:21.158 } 00:17:21.158 ] 00:17:21.158 }' 00:17:21.158 11:29:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.158 11:29:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.727 11:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.727 11:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.727 11:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.727 [2024-11-20 11:29:29.380663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.727 [2024-11-20 11:29:29.380744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.727 [2024-11-20 11:29:29.380783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:21.727 [2024-11-20 11:29:29.380800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.727 [2024-11-20 11:29:29.381415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.727 [2024-11-20 11:29:29.381447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.727 [2024-11-20 11:29:29.381577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:21.727 [2024-11-20 11:29:29.381604] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.727 [2024-11-20 11:29:29.381670] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
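The verify_raid_bdev_state checks that recur throughout this trace all follow one shape: fetch every raid bdev over RPC, pick out one record with jq's `select`, and compare individual fields against the expected values. A trimmed sketch, where `$response` is a literal standing in for the output of `rpc.py bdev_raid_get_bdevs all`:

```shell
#!/usr/bin/env bash
# Sketch of the field-verification pattern traced above. The response
# literal is a reduced stand-in for real bdev_raid_get_bdevs output.
response='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
           "num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'

# Select the one record under test by name
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$response")

state=$(jq -r '.state' <<< "$info")
level=$(jq -r '.raid_level' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

# Compare each field the way the verify helper does
[ "$state" = online ]
[ "$level" = raid1 ]
(( discovered == 1 ))
echo "raid_bdev1: $state/$level, $discovered base bdev(s) discovered"
```

After the `bdev_raid_remove_base_bdev spare` call earlier in the trace, the same checks run with an expected discovered count of 1 and a null entry in base_bdevs_list, which is exactly what the logged JSON shows.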
00:17:21.727 [2024-11-20 11:29:29.381700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.727 [2024-11-20 11:29:29.397930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:17:21.727 spare 00:17:21.727 11:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.727 11:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.727 [2024-11-20 11:29:29.400462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.664 "name": "raid_bdev1", 00:17:22.664 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:22.664 "strip_size_kb": 0, 00:17:22.664 
"state": "online", 00:17:22.664 "raid_level": "raid1", 00:17:22.664 "superblock": true, 00:17:22.664 "num_base_bdevs": 2, 00:17:22.664 "num_base_bdevs_discovered": 2, 00:17:22.664 "num_base_bdevs_operational": 2, 00:17:22.664 "process": { 00:17:22.664 "type": "rebuild", 00:17:22.664 "target": "spare", 00:17:22.664 "progress": { 00:17:22.664 "blocks": 20480, 00:17:22.664 "percent": 32 00:17:22.664 } 00:17:22.664 }, 00:17:22.664 "base_bdevs_list": [ 00:17:22.664 { 00:17:22.664 "name": "spare", 00:17:22.664 "uuid": "e459e6f1-1a03-5a9e-b69f-475c7cc56eef", 00:17:22.664 "is_configured": true, 00:17:22.664 "data_offset": 2048, 00:17:22.664 "data_size": 63488 00:17:22.664 }, 00:17:22.664 { 00:17:22.664 "name": "BaseBdev2", 00:17:22.664 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:22.664 "is_configured": true, 00:17:22.664 "data_offset": 2048, 00:17:22.664 "data_size": 63488 00:17:22.664 } 00:17:22.664 ] 00:17:22.664 }' 00:17:22.664 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.923 [2024-11-20 11:29:30.569811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.923 [2024-11-20 11:29:30.609575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:22.923 [2024-11-20 11:29:30.609681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.923 [2024-11-20 11:29:30.609705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:22.923 [2024-11-20 11:29:30.609721] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.923 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.923 "name": "raid_bdev1", 00:17:22.923 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:22.923 "strip_size_kb": 0, 00:17:22.923 "state": "online", 00:17:22.923 "raid_level": "raid1", 00:17:22.923 "superblock": true, 00:17:22.923 "num_base_bdevs": 2, 00:17:22.923 "num_base_bdevs_discovered": 1, 00:17:22.923 "num_base_bdevs_operational": 1, 00:17:22.923 "base_bdevs_list": [ 00:17:22.923 { 00:17:22.923 "name": null, 00:17:22.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.923 "is_configured": false, 00:17:22.923 "data_offset": 0, 00:17:22.923 "data_size": 63488 00:17:22.923 }, 00:17:22.923 { 00:17:22.923 "name": "BaseBdev2", 00:17:22.923 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:22.923 "is_configured": true, 00:17:22.923 "data_offset": 2048, 00:17:22.923 "data_size": 63488 00:17:22.923 } 00:17:22.923 ] 00:17:22.923 }' 00:17:22.924 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.924 11:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.492 "name": "raid_bdev1", 00:17:23.492 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:23.492 "strip_size_kb": 0, 00:17:23.492 "state": "online", 00:17:23.492 "raid_level": "raid1", 00:17:23.492 "superblock": true, 00:17:23.492 "num_base_bdevs": 2, 00:17:23.492 "num_base_bdevs_discovered": 1, 00:17:23.492 "num_base_bdevs_operational": 1, 00:17:23.492 "base_bdevs_list": [ 00:17:23.492 { 00:17:23.492 "name": null, 00:17:23.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.492 "is_configured": false, 00:17:23.492 "data_offset": 0, 00:17:23.492 "data_size": 63488 00:17:23.492 }, 00:17:23.492 { 00:17:23.492 "name": "BaseBdev2", 00:17:23.492 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:23.492 "is_configured": true, 00:17:23.492 "data_offset": 2048, 00:17:23.492 "data_size": 63488 00:17:23.492 } 00:17:23.492 ] 00:17:23.492 }' 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.492 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.751 [2024-11-20 11:29:31.352528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:23.751 [2024-11-20 11:29:31.352608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.751 [2024-11-20 11:29:31.352671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:23.751 [2024-11-20 11:29:31.352693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.751 [2024-11-20 11:29:31.353315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.751 [2024-11-20 11:29:31.353354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:23.751 [2024-11-20 11:29:31.353501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:23.751 [2024-11-20 11:29:31.353528] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.751 [2024-11-20 11:29:31.353540] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.751 [2024-11-20 11:29:31.353555] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:23.751 BaseBdev1 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.751 11:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.688 "name": "raid_bdev1", 00:17:24.688 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:24.688 "strip_size_kb": 0, 00:17:24.688 "state": "online", 00:17:24.688 "raid_level": "raid1", 00:17:24.688 "superblock": true, 00:17:24.688 "num_base_bdevs": 2, 00:17:24.688 "num_base_bdevs_discovered": 1, 00:17:24.688 "num_base_bdevs_operational": 1, 00:17:24.688 "base_bdevs_list": [ 00:17:24.688 { 00:17:24.688 "name": null, 00:17:24.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.688 "is_configured": false, 00:17:24.688 "data_offset": 0, 00:17:24.688 "data_size": 63488 00:17:24.688 }, 00:17:24.688 { 00:17:24.688 "name": "BaseBdev2", 00:17:24.688 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:24.688 "is_configured": true, 00:17:24.688 "data_offset": 2048, 00:17:24.688 "data_size": 63488 00:17:24.688 } 00:17:24.688 ] 00:17:24.688 }' 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.688 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.274 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.275 "name": "raid_bdev1", 00:17:25.275 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:25.275 "strip_size_kb": 0, 00:17:25.275 "state": "online", 00:17:25.275 "raid_level": "raid1", 00:17:25.275 "superblock": true, 00:17:25.275 "num_base_bdevs": 2, 00:17:25.275 "num_base_bdevs_discovered": 1, 00:17:25.275 "num_base_bdevs_operational": 1, 00:17:25.275 "base_bdevs_list": [ 00:17:25.275 { 00:17:25.275 "name": null, 00:17:25.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.275 "is_configured": false, 00:17:25.275 "data_offset": 0, 00:17:25.275 "data_size": 63488 00:17:25.275 }, 00:17:25.275 { 00:17:25.275 "name": "BaseBdev2", 00:17:25.275 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:25.275 "is_configured": true, 00:17:25.275 "data_offset": 2048, 00:17:25.275 "data_size": 63488 00:17:25.275 } 00:17:25.275 ] 00:17:25.275 }' 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.275 11:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.275 [2024-11-20 11:29:33.061456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.275 [2024-11-20 11:29:33.061721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.275 [2024-11-20 11:29:33.061741] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.275 request: 00:17:25.275 { 00:17:25.275 "base_bdev": "BaseBdev1", 00:17:25.275 "raid_bdev": "raid_bdev1", 00:17:25.275 "method": "bdev_raid_add_base_bdev", 00:17:25.275 "req_id": 1 00:17:25.275 } 00:17:25.275 Got JSON-RPC error response 00:17:25.275 response: 00:17:25.275 { 00:17:25.275 "code": -22, 00:17:25.275 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:25.275 } 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.275 11:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.653 "name": "raid_bdev1", 00:17:26.653 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:26.653 "strip_size_kb": 0, 00:17:26.653 "state": "online", 00:17:26.653 "raid_level": "raid1", 00:17:26.653 "superblock": true, 00:17:26.653 "num_base_bdevs": 2, 00:17:26.653 "num_base_bdevs_discovered": 1, 00:17:26.653 "num_base_bdevs_operational": 1, 00:17:26.653 "base_bdevs_list": [ 00:17:26.653 { 00:17:26.653 "name": null, 00:17:26.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.653 "is_configured": false, 00:17:26.653 "data_offset": 0, 00:17:26.653 "data_size": 63488 00:17:26.653 }, 00:17:26.653 { 00:17:26.653 "name": "BaseBdev2", 00:17:26.653 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:26.653 "is_configured": true, 00:17:26.653 "data_offset": 2048, 00:17:26.653 "data_size": 63488 00:17:26.653 } 00:17:26.653 ] 00:17:26.653 }' 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.653 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.937 11:29:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.937 "name": "raid_bdev1", 00:17:26.937 "uuid": "45a996a8-b43a-4f0a-acf4-082ab990a17f", 00:17:26.937 "strip_size_kb": 0, 00:17:26.937 "state": "online", 00:17:26.937 "raid_level": "raid1", 00:17:26.937 "superblock": true, 00:17:26.937 "num_base_bdevs": 2, 00:17:26.937 "num_base_bdevs_discovered": 1, 00:17:26.937 "num_base_bdevs_operational": 1, 00:17:26.937 "base_bdevs_list": [ 00:17:26.937 { 00:17:26.937 "name": null, 00:17:26.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.937 "is_configured": false, 00:17:26.937 "data_offset": 0, 00:17:26.937 "data_size": 63488 00:17:26.937 }, 00:17:26.937 { 00:17:26.937 "name": "BaseBdev2", 00:17:26.937 "uuid": "63b96de7-3bde-56b0-97df-74b136e01644", 00:17:26.937 "is_configured": true, 00:17:26.937 "data_offset": 2048, 00:17:26.937 "data_size": 63488 00:17:26.937 } 00:17:26.937 ] 00:17:26.937 }' 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.937 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.199 11:29:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77003 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77003 ']' 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77003 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77003 00:17:27.199 killing process with pid 77003 00:17:27.199 Received shutdown signal, test time was about 18.235058 seconds 00:17:27.199 00:17:27.199 Latency(us) 00:17:27.199 [2024-11-20T11:29:35.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.199 [2024-11-20T11:29:35.045Z] =================================================================================================================== 00:17:27.199 [2024-11-20T11:29:35.045Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77003' 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77003 00:17:27.199 [2024-11-20 11:29:34.816340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.199 11:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77003 00:17:27.199 [2024-11-20 11:29:34.816500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.199 [2024-11-20 11:29:34.816580] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.199 [2024-11-20 11:29:34.816596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:27.199 [2024-11-20 11:29:35.029436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:28.578 00:17:28.578 real 0m21.574s 00:17:28.578 user 0m29.422s 00:17:28.578 sys 0m2.044s 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.578 ************************************ 00:17:28.578 END TEST raid_rebuild_test_sb_io 00:17:28.578 ************************************ 00:17:28.578 11:29:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:28.578 11:29:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:17:28.578 11:29:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:28.578 11:29:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.578 11:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.578 ************************************ 00:17:28.578 START TEST raid_rebuild_test 00:17:28.578 ************************************ 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:28.578 11:29:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77703 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77703 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77703 ']' 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.578 11:29:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.578 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:28.578 Zero copy mechanism will not be used. 
00:17:28.578 [2024-11-20 11:29:36.252007] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:17:28.578 [2024-11-20 11:29:36.252186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77703 ] 00:17:28.837 [2024-11-20 11:29:36.436464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.837 [2024-11-20 11:29:36.566606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.096 [2024-11-20 11:29:36.772159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.096 [2024-11-20 11:29:36.772218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 BaseBdev1_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 
[2024-11-20 11:29:37.358592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:29.664 [2024-11-20 11:29:37.358680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.664 [2024-11-20 11:29:37.358715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:29.664 [2024-11-20 11:29:37.358733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.664 [2024-11-20 11:29:37.361456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.664 [2024-11-20 11:29:37.361502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:29.664 BaseBdev1 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 BaseBdev2_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 [2024-11-20 11:29:37.415132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:29.664 [2024-11-20 11:29:37.415202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:29.664 [2024-11-20 11:29:37.415230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:29.664 [2024-11-20 11:29:37.415248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.664 [2024-11-20 11:29:37.418057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.664 [2024-11-20 11:29:37.418129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:29.664 BaseBdev2 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 BaseBdev3_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.664 [2024-11-20 11:29:37.483003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:29.664 [2024-11-20 11:29:37.483070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.664 [2024-11-20 11:29:37.483101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:29.664 [2024-11-20 11:29:37.483120] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.664 [2024-11-20 11:29:37.485898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.664 [2024-11-20 11:29:37.485947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:29.664 BaseBdev3 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.664 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 BaseBdev4_malloc 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 [2024-11-20 11:29:37.541628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:29.924 [2024-11-20 11:29:37.541695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.924 [2024-11-20 11:29:37.541723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:29.924 [2024-11-20 11:29:37.541741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.924 [2024-11-20 11:29:37.544414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.924 [2024-11-20 11:29:37.544466] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:29.924 BaseBdev4 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 spare_malloc 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 spare_delay 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 [2024-11-20 11:29:37.603688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:29.924 [2024-11-20 11:29:37.603777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.924 [2024-11-20 11:29:37.603806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:29.924 [2024-11-20 11:29:37.603823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.924 [2024-11-20 
11:29:37.606698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.924 [2024-11-20 11:29:37.606746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:29.924 spare 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 [2024-11-20 11:29:37.611740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.924 [2024-11-20 11:29:37.614355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.924 [2024-11-20 11:29:37.614454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.924 [2024-11-20 11:29:37.614533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:29.924 [2024-11-20 11:29:37.614662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:29.924 [2024-11-20 11:29:37.614686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:29.924 [2024-11-20 11:29:37.614995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:29.924 [2024-11-20 11:29:37.615229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:29.924 [2024-11-20 11:29:37.615257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:29.924 [2024-11-20 11:29:37.615444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.924 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.924 "name": "raid_bdev1", 00:17:29.924 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:29.924 "strip_size_kb": 0, 00:17:29.924 "state": "online", 00:17:29.924 "raid_level": 
"raid1", 00:17:29.924 "superblock": false, 00:17:29.924 "num_base_bdevs": 4, 00:17:29.924 "num_base_bdevs_discovered": 4, 00:17:29.924 "num_base_bdevs_operational": 4, 00:17:29.924 "base_bdevs_list": [ 00:17:29.924 { 00:17:29.924 "name": "BaseBdev1", 00:17:29.924 "uuid": "a7df1271-9218-5d01-b795-692fdb6a13e4", 00:17:29.924 "is_configured": true, 00:17:29.924 "data_offset": 0, 00:17:29.924 "data_size": 65536 00:17:29.925 }, 00:17:29.925 { 00:17:29.925 "name": "BaseBdev2", 00:17:29.925 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:29.925 "is_configured": true, 00:17:29.925 "data_offset": 0, 00:17:29.925 "data_size": 65536 00:17:29.925 }, 00:17:29.925 { 00:17:29.925 "name": "BaseBdev3", 00:17:29.925 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:29.925 "is_configured": true, 00:17:29.925 "data_offset": 0, 00:17:29.925 "data_size": 65536 00:17:29.925 }, 00:17:29.925 { 00:17:29.925 "name": "BaseBdev4", 00:17:29.925 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:29.925 "is_configured": true, 00:17:29.925 "data_offset": 0, 00:17:29.925 "data_size": 65536 00:17:29.925 } 00:17:29.925 ] 00:17:29.925 }' 00:17:29.925 11:29:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.925 11:29:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.493 [2024-11-20 11:29:38.148362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.493 11:29:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:30.493 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.494 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:30.494 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.494 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.494 11:29:38 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:30.752 [2024-11-20 11:29:38.484085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:30.752 /dev/nbd0 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:30.752 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.753 1+0 records in 00:17:30.753 1+0 records out 00:17:30.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343604 s, 11.9 MB/s 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:30.753 11:29:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:40.731 65536+0 records in 00:17:40.731 65536+0 records out 00:17:40.732 33554432 bytes (34 MB, 32 MiB) copied, 8.529 s, 3.9 MB/s 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:40.732 [2024-11-20 11:29:47.361253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:40.732 
11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.732 [2024-11-20 11:29:47.393301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.732 11:29:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.732 "name": "raid_bdev1", 00:17:40.732 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:40.732 "strip_size_kb": 0, 00:17:40.732 "state": "online", 00:17:40.732 "raid_level": "raid1", 00:17:40.732 "superblock": false, 00:17:40.732 "num_base_bdevs": 4, 00:17:40.732 "num_base_bdevs_discovered": 3, 00:17:40.732 "num_base_bdevs_operational": 3, 00:17:40.732 "base_bdevs_list": [ 00:17:40.732 { 00:17:40.732 "name": null, 00:17:40.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.732 "is_configured": false, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 65536 00:17:40.732 }, 00:17:40.732 { 00:17:40.732 "name": "BaseBdev2", 00:17:40.732 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:40.732 "is_configured": true, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 65536 00:17:40.732 }, 00:17:40.732 { 00:17:40.732 "name": "BaseBdev3", 00:17:40.732 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:40.732 "is_configured": true, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 65536 00:17:40.732 }, 00:17:40.732 { 00:17:40.732 "name": "BaseBdev4", 00:17:40.732 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:40.732 
"is_configured": true, 00:17:40.732 "data_offset": 0, 00:17:40.732 "data_size": 65536 00:17:40.732 } 00:17:40.732 ] 00:17:40.732 }' 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.732 [2024-11-20 11:29:47.925447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.732 [2024-11-20 11:29:47.940388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.732 11:29:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.732 [2024-11-20 11:29:47.943037] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.301 "name": "raid_bdev1", 00:17:41.301 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:41.301 "strip_size_kb": 0, 00:17:41.301 "state": "online", 00:17:41.301 "raid_level": "raid1", 00:17:41.301 "superblock": false, 00:17:41.301 "num_base_bdevs": 4, 00:17:41.301 "num_base_bdevs_discovered": 4, 00:17:41.301 "num_base_bdevs_operational": 4, 00:17:41.301 "process": { 00:17:41.301 "type": "rebuild", 00:17:41.301 "target": "spare", 00:17:41.301 "progress": { 00:17:41.301 "blocks": 20480, 00:17:41.301 "percent": 31 00:17:41.301 } 00:17:41.301 }, 00:17:41.301 "base_bdevs_list": [ 00:17:41.301 { 00:17:41.301 "name": "spare", 00:17:41.301 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:41.301 "is_configured": true, 00:17:41.301 "data_offset": 0, 00:17:41.301 "data_size": 65536 00:17:41.301 }, 00:17:41.301 { 00:17:41.301 "name": "BaseBdev2", 00:17:41.301 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:41.301 "is_configured": true, 00:17:41.301 "data_offset": 0, 00:17:41.301 "data_size": 65536 00:17:41.301 }, 00:17:41.301 { 00:17:41.301 "name": "BaseBdev3", 00:17:41.301 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:41.301 "is_configured": true, 00:17:41.301 "data_offset": 0, 00:17:41.301 "data_size": 65536 00:17:41.301 }, 00:17:41.301 { 00:17:41.301 "name": "BaseBdev4", 00:17:41.301 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:41.301 "is_configured": true, 00:17:41.301 "data_offset": 0, 00:17:41.301 "data_size": 65536 00:17:41.301 } 00:17:41.301 ] 00:17:41.301 }' 00:17:41.301 11:29:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.301 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.301 [2024-11-20 11:29:49.096287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.561 [2024-11-20 11:29:49.152363] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.561 [2024-11-20 11:29:49.152512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.561 [2024-11-20 11:29:49.152540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.561 [2024-11-20 11:29:49.152556] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.561 "name": "raid_bdev1", 00:17:41.561 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:41.561 "strip_size_kb": 0, 00:17:41.561 "state": "online", 00:17:41.561 "raid_level": "raid1", 00:17:41.561 "superblock": false, 00:17:41.561 "num_base_bdevs": 4, 00:17:41.561 "num_base_bdevs_discovered": 3, 00:17:41.561 "num_base_bdevs_operational": 3, 00:17:41.561 "base_bdevs_list": [ 00:17:41.561 { 00:17:41.561 "name": null, 00:17:41.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.561 "is_configured": false, 00:17:41.561 "data_offset": 0, 00:17:41.561 "data_size": 65536 00:17:41.561 }, 00:17:41.561 { 00:17:41.561 "name": "BaseBdev2", 00:17:41.561 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:41.561 "is_configured": true, 00:17:41.561 "data_offset": 0, 00:17:41.561 "data_size": 65536 00:17:41.561 }, 00:17:41.561 { 
00:17:41.561 "name": "BaseBdev3", 00:17:41.561 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:41.561 "is_configured": true, 00:17:41.561 "data_offset": 0, 00:17:41.561 "data_size": 65536 00:17:41.561 }, 00:17:41.561 { 00:17:41.561 "name": "BaseBdev4", 00:17:41.561 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:41.561 "is_configured": true, 00:17:41.561 "data_offset": 0, 00:17:41.561 "data_size": 65536 00:17:41.561 } 00:17:41.561 ] 00:17:41.561 }' 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.561 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.130 "name": "raid_bdev1", 00:17:42.130 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:42.130 "strip_size_kb": 0, 00:17:42.130 "state": "online", 
00:17:42.130 "raid_level": "raid1", 00:17:42.130 "superblock": false, 00:17:42.130 "num_base_bdevs": 4, 00:17:42.130 "num_base_bdevs_discovered": 3, 00:17:42.130 "num_base_bdevs_operational": 3, 00:17:42.130 "base_bdevs_list": [ 00:17:42.130 { 00:17:42.130 "name": null, 00:17:42.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.130 "is_configured": false, 00:17:42.130 "data_offset": 0, 00:17:42.130 "data_size": 65536 00:17:42.130 }, 00:17:42.130 { 00:17:42.130 "name": "BaseBdev2", 00:17:42.130 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:42.130 "is_configured": true, 00:17:42.130 "data_offset": 0, 00:17:42.130 "data_size": 65536 00:17:42.130 }, 00:17:42.130 { 00:17:42.130 "name": "BaseBdev3", 00:17:42.130 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:42.130 "is_configured": true, 00:17:42.130 "data_offset": 0, 00:17:42.130 "data_size": 65536 00:17:42.130 }, 00:17:42.130 { 00:17:42.130 "name": "BaseBdev4", 00:17:42.130 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:42.130 "is_configured": true, 00:17:42.130 "data_offset": 0, 00:17:42.130 "data_size": 65536 00:17:42.130 } 00:17:42.130 ] 00:17:42.130 }' 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.130 [2024-11-20 11:29:49.857852] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.130 [2024-11-20 11:29:49.871535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.130 11:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.130 [2024-11-20 11:29:49.874262] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.064 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.064 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.064 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.065 11:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.325 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.325 "name": "raid_bdev1", 00:17:43.325 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:43.325 "strip_size_kb": 0, 00:17:43.325 "state": "online", 00:17:43.325 "raid_level": "raid1", 00:17:43.325 "superblock": false, 00:17:43.325 "num_base_bdevs": 4, 00:17:43.325 
"num_base_bdevs_discovered": 4, 00:17:43.325 "num_base_bdevs_operational": 4, 00:17:43.325 "process": { 00:17:43.325 "type": "rebuild", 00:17:43.325 "target": "spare", 00:17:43.325 "progress": { 00:17:43.325 "blocks": 20480, 00:17:43.325 "percent": 31 00:17:43.325 } 00:17:43.325 }, 00:17:43.325 "base_bdevs_list": [ 00:17:43.325 { 00:17:43.325 "name": "spare", 00:17:43.325 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:43.325 "is_configured": true, 00:17:43.325 "data_offset": 0, 00:17:43.325 "data_size": 65536 00:17:43.325 }, 00:17:43.325 { 00:17:43.325 "name": "BaseBdev2", 00:17:43.325 "uuid": "ad1d9ce2-2d27-5312-9a06-b10950d1ddde", 00:17:43.325 "is_configured": true, 00:17:43.325 "data_offset": 0, 00:17:43.325 "data_size": 65536 00:17:43.325 }, 00:17:43.325 { 00:17:43.325 "name": "BaseBdev3", 00:17:43.325 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:43.325 "is_configured": true, 00:17:43.325 "data_offset": 0, 00:17:43.325 "data_size": 65536 00:17:43.325 }, 00:17:43.325 { 00:17:43.325 "name": "BaseBdev4", 00:17:43.325 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:43.325 "is_configured": true, 00:17:43.325 "data_offset": 0, 00:17:43.325 "data_size": 65536 00:17:43.325 } 00:17:43.325 ] 00:17:43.325 }' 00:17:43.325 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.325 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.325 11:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 [2024-11-20 11:29:51.047495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.325 [2024-11-20 11:29:51.083498] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.325 11:29:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.325 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.325 "name": "raid_bdev1", 00:17:43.326 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:43.326 "strip_size_kb": 0, 00:17:43.326 "state": "online", 00:17:43.326 "raid_level": "raid1", 00:17:43.326 "superblock": false, 00:17:43.326 "num_base_bdevs": 4, 00:17:43.326 "num_base_bdevs_discovered": 3, 00:17:43.326 "num_base_bdevs_operational": 3, 00:17:43.326 "process": { 00:17:43.326 "type": "rebuild", 00:17:43.326 "target": "spare", 00:17:43.326 "progress": { 00:17:43.326 "blocks": 24576, 00:17:43.326 "percent": 37 00:17:43.326 } 00:17:43.326 }, 00:17:43.326 "base_bdevs_list": [ 00:17:43.326 { 00:17:43.326 "name": "spare", 00:17:43.326 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:43.326 "is_configured": true, 00:17:43.326 "data_offset": 0, 00:17:43.326 "data_size": 65536 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": null, 00:17:43.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.326 "is_configured": false, 00:17:43.326 "data_offset": 0, 00:17:43.326 "data_size": 65536 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "BaseBdev3", 00:17:43.326 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:43.326 "is_configured": true, 00:17:43.326 "data_offset": 0, 00:17:43.326 "data_size": 65536 00:17:43.326 }, 00:17:43.326 { 00:17:43.326 "name": "BaseBdev4", 00:17:43.326 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:43.326 "is_configured": true, 00:17:43.326 "data_offset": 0, 00:17:43.326 "data_size": 65536 00:17:43.326 } 00:17:43.326 ] 00:17:43.326 }' 00:17:43.326 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.585 "name": "raid_bdev1", 00:17:43.585 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:43.585 "strip_size_kb": 0, 00:17:43.585 "state": "online", 00:17:43.585 "raid_level": "raid1", 00:17:43.585 "superblock": false, 00:17:43.585 "num_base_bdevs": 4, 00:17:43.585 "num_base_bdevs_discovered": 3, 00:17:43.585 "num_base_bdevs_operational": 3, 00:17:43.585 "process": { 00:17:43.585 "type": "rebuild", 00:17:43.585 "target": "spare", 00:17:43.585 "progress": { 
00:17:43.585 "blocks": 26624, 00:17:43.585 "percent": 40 00:17:43.585 } 00:17:43.585 }, 00:17:43.585 "base_bdevs_list": [ 00:17:43.585 { 00:17:43.585 "name": "spare", 00:17:43.585 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:43.585 "is_configured": true, 00:17:43.585 "data_offset": 0, 00:17:43.585 "data_size": 65536 00:17:43.585 }, 00:17:43.585 { 00:17:43.585 "name": null, 00:17:43.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.585 "is_configured": false, 00:17:43.585 "data_offset": 0, 00:17:43.585 "data_size": 65536 00:17:43.585 }, 00:17:43.585 { 00:17:43.585 "name": "BaseBdev3", 00:17:43.585 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:43.585 "is_configured": true, 00:17:43.585 "data_offset": 0, 00:17:43.585 "data_size": 65536 00:17:43.585 }, 00:17:43.585 { 00:17:43.585 "name": "BaseBdev4", 00:17:43.585 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:43.585 "is_configured": true, 00:17:43.585 "data_offset": 0, 00:17:43.585 "data_size": 65536 00:17:43.585 } 00:17:43.585 ] 00:17:43.585 }' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.585 11:29:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.962 "name": "raid_bdev1", 00:17:44.962 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:44.962 "strip_size_kb": 0, 00:17:44.962 "state": "online", 00:17:44.962 "raid_level": "raid1", 00:17:44.962 "superblock": false, 00:17:44.962 "num_base_bdevs": 4, 00:17:44.962 "num_base_bdevs_discovered": 3, 00:17:44.962 "num_base_bdevs_operational": 3, 00:17:44.962 "process": { 00:17:44.962 "type": "rebuild", 00:17:44.962 "target": "spare", 00:17:44.962 "progress": { 00:17:44.962 "blocks": 51200, 00:17:44.962 "percent": 78 00:17:44.962 } 00:17:44.962 }, 00:17:44.962 "base_bdevs_list": [ 00:17:44.962 { 00:17:44.962 "name": "spare", 00:17:44.962 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:44.962 "is_configured": true, 00:17:44.962 "data_offset": 0, 00:17:44.962 "data_size": 65536 00:17:44.962 }, 00:17:44.962 { 00:17:44.962 "name": null, 00:17:44.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.962 "is_configured": false, 00:17:44.962 "data_offset": 0, 00:17:44.962 "data_size": 65536 00:17:44.962 }, 00:17:44.962 { 00:17:44.962 "name": "BaseBdev3", 00:17:44.962 "uuid": 
"a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:44.962 "is_configured": true, 00:17:44.962 "data_offset": 0, 00:17:44.962 "data_size": 65536 00:17:44.962 }, 00:17:44.962 { 00:17:44.962 "name": "BaseBdev4", 00:17:44.962 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:44.962 "is_configured": true, 00:17:44.962 "data_offset": 0, 00:17:44.962 "data_size": 65536 00:17:44.962 } 00:17:44.962 ] 00:17:44.962 }' 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.962 11:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.531 [2024-11-20 11:29:53.098311] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:45.531 [2024-11-20 11:29:53.098418] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.531 [2024-11-20 11:29:53.098525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.853 11:29:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.853 "name": "raid_bdev1", 00:17:45.853 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:45.853 "strip_size_kb": 0, 00:17:45.853 "state": "online", 00:17:45.853 "raid_level": "raid1", 00:17:45.853 "superblock": false, 00:17:45.853 "num_base_bdevs": 4, 00:17:45.853 "num_base_bdevs_discovered": 3, 00:17:45.853 "num_base_bdevs_operational": 3, 00:17:45.853 "base_bdevs_list": [ 00:17:45.853 { 00:17:45.853 "name": "spare", 00:17:45.853 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:45.853 "is_configured": true, 00:17:45.853 "data_offset": 0, 00:17:45.853 "data_size": 65536 00:17:45.853 }, 00:17:45.853 { 00:17:45.853 "name": null, 00:17:45.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.853 "is_configured": false, 00:17:45.853 "data_offset": 0, 00:17:45.853 "data_size": 65536 00:17:45.853 }, 00:17:45.853 { 00:17:45.853 "name": "BaseBdev3", 00:17:45.853 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:45.853 "is_configured": true, 00:17:45.853 "data_offset": 0, 00:17:45.853 "data_size": 65536 00:17:45.853 }, 00:17:45.853 { 00:17:45.853 "name": "BaseBdev4", 00:17:45.853 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:45.853 "is_configured": true, 00:17:45.853 "data_offset": 0, 00:17:45.853 "data_size": 65536 00:17:45.853 } 00:17:45.853 ] 00:17:45.853 }' 00:17:45.853 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:46.127 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.128 "name": "raid_bdev1", 00:17:46.128 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:46.128 "strip_size_kb": 0, 00:17:46.128 "state": "online", 00:17:46.128 "raid_level": "raid1", 00:17:46.128 "superblock": false, 00:17:46.128 "num_base_bdevs": 4, 00:17:46.128 "num_base_bdevs_discovered": 3, 00:17:46.128 "num_base_bdevs_operational": 3, 00:17:46.128 
"base_bdevs_list": [ 00:17:46.128 { 00:17:46.128 "name": "spare", 00:17:46.128 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:46.128 "is_configured": true, 00:17:46.128 "data_offset": 0, 00:17:46.128 "data_size": 65536 00:17:46.128 }, 00:17:46.128 { 00:17:46.128 "name": null, 00:17:46.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.128 "is_configured": false, 00:17:46.128 "data_offset": 0, 00:17:46.128 "data_size": 65536 00:17:46.128 }, 00:17:46.128 { 00:17:46.128 "name": "BaseBdev3", 00:17:46.128 "uuid": "a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:46.128 "is_configured": true, 00:17:46.128 "data_offset": 0, 00:17:46.128 "data_size": 65536 00:17:46.128 }, 00:17:46.128 { 00:17:46.128 "name": "BaseBdev4", 00:17:46.128 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:46.128 "is_configured": true, 00:17:46.128 "data_offset": 0, 00:17:46.128 "data_size": 65536 00:17:46.128 } 00:17:46.128 ] 00:17:46.128 }' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.128 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.387 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.387 "name": "raid_bdev1", 00:17:46.387 "uuid": "96144d04-fbba-40a8-8b32-6b64dfa79ac0", 00:17:46.387 "strip_size_kb": 0, 00:17:46.387 "state": "online", 00:17:46.387 "raid_level": "raid1", 00:17:46.387 "superblock": false, 00:17:46.387 "num_base_bdevs": 4, 00:17:46.387 "num_base_bdevs_discovered": 3, 00:17:46.387 "num_base_bdevs_operational": 3, 00:17:46.387 "base_bdevs_list": [ 00:17:46.387 { 00:17:46.387 "name": "spare", 00:17:46.387 "uuid": "c51f4324-fd89-5796-b5ec-b868c3c91e40", 00:17:46.387 "is_configured": true, 00:17:46.387 "data_offset": 0, 00:17:46.387 "data_size": 65536 00:17:46.387 }, 00:17:46.387 { 00:17:46.387 "name": null, 00:17:46.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.387 "is_configured": false, 00:17:46.387 "data_offset": 0, 00:17:46.387 "data_size": 65536 00:17:46.387 }, 00:17:46.387 { 00:17:46.387 "name": "BaseBdev3", 00:17:46.387 "uuid": 
"a5fb5ca7-be11-5bae-b1bf-bb8cfe5b1008", 00:17:46.387 "is_configured": true, 00:17:46.387 "data_offset": 0, 00:17:46.387 "data_size": 65536 00:17:46.387 }, 00:17:46.387 { 00:17:46.387 "name": "BaseBdev4", 00:17:46.387 "uuid": "2dd20ee2-77a5-56f3-8c33-3e5bdd127544", 00:17:46.387 "is_configured": true, 00:17:46.387 "data_offset": 0, 00:17:46.387 "data_size": 65536 00:17:46.387 } 00:17:46.387 ] 00:17:46.387 }' 00:17:46.387 11:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.387 11:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 [2024-11-20 11:29:54.452106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.646 [2024-11-20 11:29:54.452146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.646 [2024-11-20 11:29:54.452265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.646 [2024-11-20 11:29:54.452375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.646 [2024-11-20 11:29:54.452407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.646 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:46.904 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:47.163 /dev/nbd0 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:47.163 11:29:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.163 1+0 records in 00:17:47.163 1+0 records out 00:17:47.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293338 s, 14.0 MB/s 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.163 11:29:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:47.423 /dev/nbd1 00:17:47.423 
11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:47.423 1+0 records in 00:17:47.423 1+0 records out 00:17:47.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424457 s, 9.6 MB/s 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:47.423 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.681 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:47.941 11:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77703 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77703 ']' 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77703 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.508 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77703 00:17:48.508 killing process with pid 77703 00:17:48.509 Received shutdown signal, test time was about 60.000000 seconds 00:17:48.509 00:17:48.509 Latency(us) 00:17:48.509 [2024-11-20T11:29:56.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.509 [2024-11-20T11:29:56.355Z] 
=================================================================================================================== 00:17:48.509 [2024-11-20T11:29:56.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.509 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.509 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.509 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77703' 00:17:48.509 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77703 00:17:48.509 [2024-11-20 11:29:56.109724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.509 11:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77703 00:17:48.767 [2024-11-20 11:29:56.566214] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:50.155 00:17:50.155 real 0m21.521s 00:17:50.155 user 0m24.104s 00:17:50.155 sys 0m3.662s 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 ************************************ 00:17:50.155 END TEST raid_rebuild_test 00:17:50.155 ************************************ 00:17:50.155 11:29:57 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:50.155 11:29:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:50.155 11:29:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.155 11:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 ************************************ 00:17:50.155 START TEST raid_rebuild_test_sb 00:17:50.155 
************************************ 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78188 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78188 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78188 ']' 00:17:50.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.155 11:29:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.155 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:50.155 Zero copy mechanism will not be used. 00:17:50.155 [2024-11-20 11:29:57.817515] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:17:50.155 [2024-11-20 11:29:57.817695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78188 ] 00:17:50.155 [2024-11-20 11:29:57.993001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.414 [2024-11-20 11:29:58.123557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.672 [2024-11-20 11:29:58.326209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.672 [2024-11-20 11:29:58.326268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 BaseBdev1_malloc 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 [2024-11-20 11:29:58.893669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:51.239 [2024-11-20 11:29:58.893921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.239 [2024-11-20 11:29:58.893964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:51.239 [2024-11-20 11:29:58.893984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.239 [2024-11-20 11:29:58.896874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.239 [2024-11-20 11:29:58.897072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:51.239 BaseBdev1 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 BaseBdev2_malloc 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 [2024-11-20 11:29:58.942851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:51.239 [2024-11-20 11:29:58.943057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.239 [2024-11-20 11:29:58.943094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:51.239 [2024-11-20 11:29:58.943116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.239 [2024-11-20 11:29:58.945913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.239 [2024-11-20 11:29:58.945961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:51.239 BaseBdev2 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 BaseBdev3_malloc 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 [2024-11-20 11:29:59.009575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:51.239 [2024-11-20 11:29:59.009673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.239 [2024-11-20 11:29:59.009709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.239 [2024-11-20 11:29:59.009728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.239 [2024-11-20 11:29:59.012627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.239 [2024-11-20 11:29:59.012677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:51.239 BaseBdev3 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 BaseBdev4_malloc 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.239 [2024-11-20 11:29:59.067688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:51.239 [2024-11-20 11:29:59.067943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.239 [2024-11-20 11:29:59.067982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:51.239 [2024-11-20 11:29:59.068001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.239 [2024-11-20 11:29:59.070871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.239 [2024-11-20 11:29:59.070936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:51.239 BaseBdev4 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.239 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.500 spare_malloc 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.501 spare_delay 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.501 [2024-11-20 11:29:59.132198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.501 [2024-11-20 11:29:59.132272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.501 [2024-11-20 11:29:59.132302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.501 [2024-11-20 11:29:59.132319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.501 [2024-11-20 11:29:59.135095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.501 [2024-11-20 11:29:59.135277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.501 spare 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.501 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.501 [2024-11-20 11:29:59.144254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.502 [2024-11-20 11:29:59.146720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.502 [2024-11-20 11:29:59.146837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:17:51.502 [2024-11-20 11:29:59.146918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.502 [2024-11-20 11:29:59.147150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:51.502 [2024-11-20 11:29:59.147177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:51.502 [2024-11-20 11:29:59.147490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:51.502 [2024-11-20 11:29:59.147740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.502 [2024-11-20 11:29:59.147758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.502 [2024-11-20 11:29:59.147948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.502 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.503 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.503 "name": "raid_bdev1", 00:17:51.503 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:17:51.503 "strip_size_kb": 0, 00:17:51.503 "state": "online", 00:17:51.503 "raid_level": "raid1", 00:17:51.503 "superblock": true, 00:17:51.503 "num_base_bdevs": 4, 00:17:51.503 "num_base_bdevs_discovered": 4, 00:17:51.503 "num_base_bdevs_operational": 4, 00:17:51.503 "base_bdevs_list": [ 00:17:51.503 { 00:17:51.503 "name": "BaseBdev1", 00:17:51.503 "uuid": "3288401b-07b7-5494-a8b0-8fc36fb23144", 00:17:51.503 "is_configured": true, 00:17:51.503 "data_offset": 2048, 00:17:51.503 "data_size": 63488 00:17:51.503 }, 00:17:51.503 { 00:17:51.503 "name": "BaseBdev2", 00:17:51.503 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:17:51.503 "is_configured": true, 00:17:51.503 "data_offset": 2048, 00:17:51.503 "data_size": 63488 00:17:51.503 }, 00:17:51.503 { 00:17:51.503 "name": "BaseBdev3", 00:17:51.503 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:17:51.503 "is_configured": true, 00:17:51.503 "data_offset": 2048, 00:17:51.503 "data_size": 63488 00:17:51.503 }, 00:17:51.503 { 00:17:51.503 "name": "BaseBdev4", 00:17:51.504 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:17:51.504 "is_configured": true, 
00:17:51.504 "data_offset": 2048, 00:17:51.504 "data_size": 63488 00:17:51.504 } 00:17:51.504 ] 00:17:51.504 }' 00:17:51.504 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.504 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 [2024-11-20 11:29:59.668891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 
-- # local write_unit_size 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:52.075 11:29:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:52.334 [2024-11-20 11:30:00.100609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:52.334 /dev/nbd0 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:52.334 1+0 records in 00:17:52.334 1+0 records out 00:17:52.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350353 s, 11.7 MB/s 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:52.334 11:30:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:00.445 63488+0 records in 00:18:00.445 63488+0 records out 00:18:00.445 32505856 bytes (33 MB, 31 MiB) copied, 8.03165 s, 4.0 MB/s 00:18:00.445 11:30:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.445 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.704 [2024-11-20 11:30:08.438780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.704 [2024-11-20 11:30:08.472688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:00.704 "name": "raid_bdev1", 00:18:00.704 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:00.704 "strip_size_kb": 0, 00:18:00.704 "state": "online", 00:18:00.704 "raid_level": "raid1", 00:18:00.704 "superblock": true, 00:18:00.704 "num_base_bdevs": 4, 00:18:00.704 "num_base_bdevs_discovered": 3, 00:18:00.704 "num_base_bdevs_operational": 3, 00:18:00.704 "base_bdevs_list": [ 00:18:00.704 { 00:18:00.704 "name": null, 00:18:00.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.704 "is_configured": false, 00:18:00.704 "data_offset": 0, 00:18:00.704 "data_size": 63488 00:18:00.704 }, 00:18:00.704 { 00:18:00.704 "name": "BaseBdev2", 00:18:00.704 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:18:00.704 "is_configured": true, 00:18:00.704 "data_offset": 2048, 00:18:00.704 "data_size": 63488 00:18:00.704 }, 00:18:00.704 { 00:18:00.704 "name": "BaseBdev3", 00:18:00.704 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:00.704 "is_configured": true, 00:18:00.704 "data_offset": 2048, 00:18:00.704 "data_size": 63488 00:18:00.704 }, 00:18:00.704 { 00:18:00.704 "name": "BaseBdev4", 00:18:00.704 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:00.704 "is_configured": true, 00:18:00.704 "data_offset": 2048, 00:18:00.704 "data_size": 63488 00:18:00.704 } 00:18:00.704 ] 00:18:00.704 }' 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.704 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.274 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.274 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.274 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.274 [2024-11-20 11:30:08.944845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:18:01.274 [2024-11-20 11:30:08.959096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:18:01.274 11:30:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.274 11:30:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:01.274 [2024-11-20 11:30:08.961823] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.211 11:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.211 "name": "raid_bdev1", 00:18:02.211 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:02.211 "strip_size_kb": 0, 00:18:02.211 "state": "online", 00:18:02.211 "raid_level": "raid1", 00:18:02.211 "superblock": true, 00:18:02.211 "num_base_bdevs": 4, 00:18:02.211 "num_base_bdevs_discovered": 4, 00:18:02.211 
"num_base_bdevs_operational": 4, 00:18:02.211 "process": { 00:18:02.211 "type": "rebuild", 00:18:02.211 "target": "spare", 00:18:02.211 "progress": { 00:18:02.211 "blocks": 20480, 00:18:02.211 "percent": 32 00:18:02.211 } 00:18:02.211 }, 00:18:02.211 "base_bdevs_list": [ 00:18:02.211 { 00:18:02.211 "name": "spare", 00:18:02.211 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:02.211 "is_configured": true, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": "BaseBdev2", 00:18:02.211 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:18:02.211 "is_configured": true, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": "BaseBdev3", 00:18:02.211 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:02.211 "is_configured": true, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": "BaseBdev4", 00:18:02.211 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:02.211 "is_configured": true, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 } 00:18:02.211 ] 00:18:02.211 }' 00:18:02.211 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.469 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.469 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.469 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.469 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:02.470 [2024-11-20 11:30:10.127416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.470 [2024-11-20 11:30:10.170710] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.470 [2024-11-20 11:30:10.171105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.470 [2024-11-20 11:30:10.171138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.470 [2024-11-20 11:30:10.171155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.470 11:30:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.470 "name": "raid_bdev1", 00:18:02.470 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:02.470 "strip_size_kb": 0, 00:18:02.470 "state": "online", 00:18:02.470 "raid_level": "raid1", 00:18:02.470 "superblock": true, 00:18:02.470 "num_base_bdevs": 4, 00:18:02.470 "num_base_bdevs_discovered": 3, 00:18:02.470 "num_base_bdevs_operational": 3, 00:18:02.470 "base_bdevs_list": [ 00:18:02.470 { 00:18:02.470 "name": null, 00:18:02.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.470 "is_configured": false, 00:18:02.470 "data_offset": 0, 00:18:02.470 "data_size": 63488 00:18:02.470 }, 00:18:02.470 { 00:18:02.470 "name": "BaseBdev2", 00:18:02.470 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:18:02.470 "is_configured": true, 00:18:02.470 "data_offset": 2048, 00:18:02.470 "data_size": 63488 00:18:02.470 }, 00:18:02.470 { 00:18:02.470 "name": "BaseBdev3", 00:18:02.470 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:02.470 "is_configured": true, 00:18:02.470 "data_offset": 2048, 00:18:02.470 "data_size": 63488 00:18:02.470 }, 00:18:02.470 { 00:18:02.470 "name": "BaseBdev4", 00:18:02.470 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:02.470 "is_configured": true, 00:18:02.470 "data_offset": 2048, 00:18:02.470 "data_size": 63488 00:18:02.470 } 00:18:02.470 ] 00:18:02.470 }' 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.470 11:30:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.038 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.038 "name": "raid_bdev1", 00:18:03.039 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:03.039 "strip_size_kb": 0, 00:18:03.039 "state": "online", 00:18:03.039 "raid_level": "raid1", 00:18:03.039 "superblock": true, 00:18:03.039 "num_base_bdevs": 4, 00:18:03.039 "num_base_bdevs_discovered": 3, 00:18:03.039 "num_base_bdevs_operational": 3, 00:18:03.039 "base_bdevs_list": [ 00:18:03.039 { 00:18:03.039 "name": null, 00:18:03.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.039 "is_configured": false, 00:18:03.039 "data_offset": 0, 00:18:03.039 "data_size": 63488 00:18:03.039 }, 00:18:03.039 { 00:18:03.039 "name": "BaseBdev2", 00:18:03.039 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:18:03.039 "is_configured": 
true, 00:18:03.039 "data_offset": 2048, 00:18:03.039 "data_size": 63488 00:18:03.039 }, 00:18:03.039 { 00:18:03.039 "name": "BaseBdev3", 00:18:03.039 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:03.039 "is_configured": true, 00:18:03.039 "data_offset": 2048, 00:18:03.039 "data_size": 63488 00:18:03.039 }, 00:18:03.039 { 00:18:03.039 "name": "BaseBdev4", 00:18:03.039 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:03.039 "is_configured": true, 00:18:03.039 "data_offset": 2048, 00:18:03.039 "data_size": 63488 00:18:03.039 } 00:18:03.039 ] 00:18:03.039 }' 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.039 [2024-11-20 11:30:10.855328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.039 [2024-11-20 11:30:10.869125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.039 11:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:03.039 [2024-11-20 11:30:10.871892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.423 "name": "raid_bdev1", 00:18:04.423 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:04.423 "strip_size_kb": 0, 00:18:04.423 "state": "online", 00:18:04.423 "raid_level": "raid1", 00:18:04.423 "superblock": true, 00:18:04.423 "num_base_bdevs": 4, 00:18:04.423 "num_base_bdevs_discovered": 4, 00:18:04.423 "num_base_bdevs_operational": 4, 00:18:04.423 "process": { 00:18:04.423 "type": "rebuild", 00:18:04.423 "target": "spare", 00:18:04.423 "progress": { 00:18:04.423 "blocks": 20480, 00:18:04.423 "percent": 32 00:18:04.423 } 00:18:04.423 }, 00:18:04.423 "base_bdevs_list": [ 00:18:04.423 { 00:18:04.423 "name": "spare", 00:18:04.423 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 
00:18:04.423 "name": "BaseBdev2", 00:18:04.423 "uuid": "160cd482-a4b6-55cb-9d85-63f007c7ce35", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 00:18:04.423 "name": "BaseBdev3", 00:18:04.423 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 00:18:04.423 "name": "BaseBdev4", 00:18:04.423 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 } 00:18:04.423 ] 00:18:04.423 }' 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.423 11:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:04.423 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.423 [2024-11-20 11:30:12.040951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.423 [2024-11-20 11:30:12.181091] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.423 "name": "raid_bdev1", 00:18:04.423 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 
00:18:04.423 "strip_size_kb": 0, 00:18:04.423 "state": "online", 00:18:04.423 "raid_level": "raid1", 00:18:04.423 "superblock": true, 00:18:04.423 "num_base_bdevs": 4, 00:18:04.423 "num_base_bdevs_discovered": 3, 00:18:04.423 "num_base_bdevs_operational": 3, 00:18:04.423 "process": { 00:18:04.423 "type": "rebuild", 00:18:04.423 "target": "spare", 00:18:04.423 "progress": { 00:18:04.423 "blocks": 24576, 00:18:04.423 "percent": 38 00:18:04.423 } 00:18:04.423 }, 00:18:04.423 "base_bdevs_list": [ 00:18:04.423 { 00:18:04.423 "name": "spare", 00:18:04.423 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 00:18:04.423 "name": null, 00:18:04.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.423 "is_configured": false, 00:18:04.423 "data_offset": 0, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 00:18:04.423 "name": "BaseBdev3", 00:18:04.423 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 }, 00:18:04.423 { 00:18:04.423 "name": "BaseBdev4", 00:18:04.423 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:04.423 "is_configured": true, 00:18:04.423 "data_offset": 2048, 00:18:04.423 "data_size": 63488 00:18:04.423 } 00:18:04.423 ] 00:18:04.423 }' 00:18:04.423 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:18:04.692 
11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.692 "name": "raid_bdev1", 00:18:04.692 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:04.692 "strip_size_kb": 0, 00:18:04.692 "state": "online", 00:18:04.692 "raid_level": "raid1", 00:18:04.692 "superblock": true, 00:18:04.692 "num_base_bdevs": 4, 00:18:04.692 "num_base_bdevs_discovered": 3, 00:18:04.692 "num_base_bdevs_operational": 3, 00:18:04.692 "process": { 00:18:04.692 "type": "rebuild", 00:18:04.692 "target": "spare", 00:18:04.692 "progress": { 00:18:04.692 "blocks": 26624, 00:18:04.692 "percent": 41 00:18:04.692 } 00:18:04.692 }, 00:18:04.692 "base_bdevs_list": [ 00:18:04.692 { 00:18:04.692 "name": "spare", 00:18:04.692 "uuid": 
"488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:04.692 "is_configured": true, 00:18:04.692 "data_offset": 2048, 00:18:04.692 "data_size": 63488 00:18:04.692 }, 00:18:04.692 { 00:18:04.692 "name": null, 00:18:04.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.692 "is_configured": false, 00:18:04.692 "data_offset": 0, 00:18:04.692 "data_size": 63488 00:18:04.692 }, 00:18:04.692 { 00:18:04.692 "name": "BaseBdev3", 00:18:04.692 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:04.692 "is_configured": true, 00:18:04.692 "data_offset": 2048, 00:18:04.692 "data_size": 63488 00:18:04.692 }, 00:18:04.692 { 00:18:04.692 "name": "BaseBdev4", 00:18:04.692 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:04.692 "is_configured": true, 00:18:04.692 "data_offset": 2048, 00:18:04.692 "data_size": 63488 00:18:04.692 } 00:18:04.692 ] 00:18:04.692 }' 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.692 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.693 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.693 11:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.069 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.069 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.069 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.069 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.070 11:30:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.070 "name": "raid_bdev1", 00:18:06.070 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:06.070 "strip_size_kb": 0, 00:18:06.070 "state": "online", 00:18:06.070 "raid_level": "raid1", 00:18:06.070 "superblock": true, 00:18:06.070 "num_base_bdevs": 4, 00:18:06.070 "num_base_bdevs_discovered": 3, 00:18:06.070 "num_base_bdevs_operational": 3, 00:18:06.070 "process": { 00:18:06.070 "type": "rebuild", 00:18:06.070 "target": "spare", 00:18:06.070 "progress": { 00:18:06.070 "blocks": 49152, 00:18:06.070 "percent": 77 00:18:06.070 } 00:18:06.070 }, 00:18:06.070 "base_bdevs_list": [ 00:18:06.070 { 00:18:06.070 "name": "spare", 00:18:06.070 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:06.070 "is_configured": true, 00:18:06.070 "data_offset": 2048, 00:18:06.070 "data_size": 63488 00:18:06.070 }, 00:18:06.070 { 00:18:06.070 "name": null, 00:18:06.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.070 "is_configured": false, 00:18:06.070 "data_offset": 0, 00:18:06.070 "data_size": 63488 00:18:06.070 }, 00:18:06.070 { 00:18:06.070 "name": "BaseBdev3", 00:18:06.070 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:06.070 "is_configured": true, 00:18:06.070 "data_offset": 2048, 00:18:06.070 "data_size": 63488 00:18:06.070 }, 
00:18:06.070 { 00:18:06.070 "name": "BaseBdev4", 00:18:06.070 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:06.070 "is_configured": true, 00:18:06.070 "data_offset": 2048, 00:18:06.070 "data_size": 63488 00:18:06.070 } 00:18:06.070 ] 00:18:06.070 }' 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.070 11:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:06.328 [2024-11-20 11:30:14.095566] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:06.328 [2024-11-20 11:30:14.095700] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:06.328 [2024-11-20 11:30:14.095898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.894 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.894 11:30:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.895 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.895 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.895 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.895 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.895 "name": "raid_bdev1", 00:18:06.895 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:06.895 "strip_size_kb": 0, 00:18:06.895 "state": "online", 00:18:06.895 "raid_level": "raid1", 00:18:06.895 "superblock": true, 00:18:06.895 "num_base_bdevs": 4, 00:18:06.895 "num_base_bdevs_discovered": 3, 00:18:06.895 "num_base_bdevs_operational": 3, 00:18:06.895 "base_bdevs_list": [ 00:18:06.895 { 00:18:06.895 "name": "spare", 00:18:06.895 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:06.895 "is_configured": true, 00:18:06.895 "data_offset": 2048, 00:18:06.895 "data_size": 63488 00:18:06.895 }, 00:18:06.895 { 00:18:06.895 "name": null, 00:18:06.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.895 "is_configured": false, 00:18:06.895 "data_offset": 0, 00:18:06.895 "data_size": 63488 00:18:06.895 }, 00:18:06.895 { 00:18:06.895 "name": "BaseBdev3", 00:18:06.895 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:06.895 "is_configured": true, 00:18:06.895 "data_offset": 2048, 00:18:06.895 "data_size": 63488 00:18:06.895 }, 00:18:06.895 { 00:18:06.895 "name": "BaseBdev4", 00:18:06.895 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:06.895 "is_configured": true, 00:18:06.895 "data_offset": 2048, 00:18:06.895 "data_size": 63488 00:18:06.895 } 00:18:06.895 ] 00:18:06.895 }' 00:18:06.895 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.153 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.153 "name": "raid_bdev1", 00:18:07.153 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:07.153 "strip_size_kb": 0, 00:18:07.153 "state": "online", 00:18:07.153 "raid_level": "raid1", 00:18:07.153 "superblock": true, 00:18:07.153 "num_base_bdevs": 4, 00:18:07.153 "num_base_bdevs_discovered": 3, 00:18:07.153 "num_base_bdevs_operational": 3, 00:18:07.153 "base_bdevs_list": [ 00:18:07.153 { 00:18:07.153 
"name": "spare", 00:18:07.153 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:07.153 "is_configured": true, 00:18:07.153 "data_offset": 2048, 00:18:07.153 "data_size": 63488 00:18:07.153 }, 00:18:07.153 { 00:18:07.153 "name": null, 00:18:07.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.153 "is_configured": false, 00:18:07.153 "data_offset": 0, 00:18:07.153 "data_size": 63488 00:18:07.153 }, 00:18:07.153 { 00:18:07.153 "name": "BaseBdev3", 00:18:07.153 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:07.153 "is_configured": true, 00:18:07.153 "data_offset": 2048, 00:18:07.153 "data_size": 63488 00:18:07.154 }, 00:18:07.154 { 00:18:07.154 "name": "BaseBdev4", 00:18:07.154 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:07.154 "is_configured": true, 00:18:07.154 "data_offset": 2048, 00:18:07.154 "data_size": 63488 00:18:07.154 } 00:18:07.154 ] 00:18:07.154 }' 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.154 "name": "raid_bdev1", 00:18:07.154 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:07.154 "strip_size_kb": 0, 00:18:07.154 "state": "online", 00:18:07.154 "raid_level": "raid1", 00:18:07.154 "superblock": true, 00:18:07.154 "num_base_bdevs": 4, 00:18:07.154 "num_base_bdevs_discovered": 3, 00:18:07.154 "num_base_bdevs_operational": 3, 00:18:07.154 "base_bdevs_list": [ 00:18:07.154 { 00:18:07.154 "name": "spare", 00:18:07.154 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:07.154 "is_configured": true, 00:18:07.154 "data_offset": 2048, 00:18:07.154 "data_size": 63488 00:18:07.154 }, 00:18:07.154 { 00:18:07.154 "name": null, 00:18:07.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.154 "is_configured": false, 00:18:07.154 "data_offset": 0, 00:18:07.154 "data_size": 63488 00:18:07.154 }, 00:18:07.154 { 00:18:07.154 "name": "BaseBdev3", 00:18:07.154 
"uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:07.154 "is_configured": true, 00:18:07.154 "data_offset": 2048, 00:18:07.154 "data_size": 63488 00:18:07.154 }, 00:18:07.154 { 00:18:07.154 "name": "BaseBdev4", 00:18:07.154 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:07.154 "is_configured": true, 00:18:07.154 "data_offset": 2048, 00:18:07.154 "data_size": 63488 00:18:07.154 } 00:18:07.154 ] 00:18:07.154 }' 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.154 11:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.722 [2024-11-20 11:30:15.435927] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.722 [2024-11-20 11:30:15.436098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.722 [2024-11-20 11:30:15.436314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.722 [2024-11-20 11:30:15.436563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.722 [2024-11-20 11:30:15.436595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.722 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:08.007 /dev/nbd0 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.007 1+0 records in 00:18:08.007 1+0 records out 00:18:08.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463782 s, 8.8 MB/s 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:08.007 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.266 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.266 11:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:08.266 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.266 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.266 11:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:08.526 /dev/nbd1 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.526 1+0 records in 00:18:08.526 1+0 records out 00:18:08.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403147 s, 10.2 MB/s 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:08.526 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:09.094 11:30:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.094 11:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.354 [2024-11-20 11:30:17.059590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.354 [2024-11-20 11:30:17.059678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.354 [2024-11-20 11:30:17.059714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:09.354 [2024-11-20 11:30:17.059730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.354 [2024-11-20 11:30:17.062723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.354 [2024-11-20 11:30:17.062776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.354 [2024-11-20 11:30:17.062909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:09.354 [2024-11-20 11:30:17.062973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.354 [2024-11-20 11:30:17.063160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:09.354 [2024-11-20 11:30:17.063291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.354 spare 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.354 [2024-11-20 11:30:17.163438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:09.354 [2024-11-20 11:30:17.163509] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:09.354 [2024-11-20 
11:30:17.163977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:09.354 [2024-11-20 11:30:17.164239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:09.354 [2024-11-20 11:30:17.164271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:09.354 [2024-11-20 11:30:17.164521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.354 11:30:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.354 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.613 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.613 "name": "raid_bdev1", 00:18:09.613 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:09.613 "strip_size_kb": 0, 00:18:09.613 "state": "online", 00:18:09.613 "raid_level": "raid1", 00:18:09.613 "superblock": true, 00:18:09.613 "num_base_bdevs": 4, 00:18:09.613 "num_base_bdevs_discovered": 3, 00:18:09.613 "num_base_bdevs_operational": 3, 00:18:09.613 "base_bdevs_list": [ 00:18:09.613 { 00:18:09.613 "name": "spare", 00:18:09.613 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:09.613 "is_configured": true, 00:18:09.613 "data_offset": 2048, 00:18:09.613 "data_size": 63488 00:18:09.613 }, 00:18:09.613 { 00:18:09.613 "name": null, 00:18:09.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.613 "is_configured": false, 00:18:09.613 "data_offset": 2048, 00:18:09.613 "data_size": 63488 00:18:09.613 }, 00:18:09.613 { 00:18:09.613 "name": "BaseBdev3", 00:18:09.613 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:09.613 "is_configured": true, 00:18:09.613 "data_offset": 2048, 00:18:09.613 "data_size": 63488 00:18:09.613 }, 00:18:09.613 { 00:18:09.613 "name": "BaseBdev4", 00:18:09.613 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:09.613 "is_configured": true, 00:18:09.613 "data_offset": 2048, 00:18:09.613 "data_size": 63488 00:18:09.613 } 00:18:09.613 ] 00:18:09.613 }' 00:18:09.613 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.613 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.872 "name": "raid_bdev1", 00:18:09.872 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:09.872 "strip_size_kb": 0, 00:18:09.872 "state": "online", 00:18:09.872 "raid_level": "raid1", 00:18:09.872 "superblock": true, 00:18:09.872 "num_base_bdevs": 4, 00:18:09.872 "num_base_bdevs_discovered": 3, 00:18:09.872 "num_base_bdevs_operational": 3, 00:18:09.872 "base_bdevs_list": [ 00:18:09.872 { 00:18:09.872 "name": "spare", 00:18:09.872 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:09.872 "is_configured": true, 00:18:09.872 "data_offset": 2048, 00:18:09.872 "data_size": 63488 00:18:09.872 }, 00:18:09.872 { 00:18:09.872 "name": null, 00:18:09.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.872 "is_configured": false, 00:18:09.872 "data_offset": 2048, 00:18:09.872 "data_size": 63488 00:18:09.872 }, 00:18:09.872 { 00:18:09.872 "name": 
"BaseBdev3", 00:18:09.872 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:09.872 "is_configured": true, 00:18:09.872 "data_offset": 2048, 00:18:09.872 "data_size": 63488 00:18:09.872 }, 00:18:09.872 { 00:18:09.872 "name": "BaseBdev4", 00:18:09.872 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:09.872 "is_configured": true, 00:18:09.872 "data_offset": 2048, 00:18:09.872 "data_size": 63488 00:18:09.872 } 00:18:09.872 ] 00:18:09.872 }' 00:18:09.872 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.131 [2024-11-20 11:30:17.844714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.131 11:30:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.131 "name": "raid_bdev1", 00:18:10.131 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:10.131 "strip_size_kb": 0, 00:18:10.131 "state": "online", 
00:18:10.131 "raid_level": "raid1", 00:18:10.131 "superblock": true, 00:18:10.131 "num_base_bdevs": 4, 00:18:10.131 "num_base_bdevs_discovered": 2, 00:18:10.131 "num_base_bdevs_operational": 2, 00:18:10.131 "base_bdevs_list": [ 00:18:10.131 { 00:18:10.131 "name": null, 00:18:10.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.131 "is_configured": false, 00:18:10.131 "data_offset": 0, 00:18:10.131 "data_size": 63488 00:18:10.131 }, 00:18:10.131 { 00:18:10.131 "name": null, 00:18:10.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.131 "is_configured": false, 00:18:10.131 "data_offset": 2048, 00:18:10.131 "data_size": 63488 00:18:10.131 }, 00:18:10.131 { 00:18:10.131 "name": "BaseBdev3", 00:18:10.131 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:10.131 "is_configured": true, 00:18:10.131 "data_offset": 2048, 00:18:10.131 "data_size": 63488 00:18:10.131 }, 00:18:10.131 { 00:18:10.131 "name": "BaseBdev4", 00:18:10.131 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:10.131 "is_configured": true, 00:18:10.131 "data_offset": 2048, 00:18:10.131 "data_size": 63488 00:18:10.131 } 00:18:10.131 ] 00:18:10.131 }' 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.131 11:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.700 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.700 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.700 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.700 [2024-11-20 11:30:18.368827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.700 [2024-11-20 11:30:18.369061] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:18:10.700 [2024-11-20 11:30:18.369082] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:10.700 [2024-11-20 11:30:18.369132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.700 [2024-11-20 11:30:18.382372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:10.700 11:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.700 11:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:10.700 [2024-11-20 11:30:18.384933] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.636 "name": "raid_bdev1", 00:18:11.636 "uuid": 
"ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:11.636 "strip_size_kb": 0, 00:18:11.636 "state": "online", 00:18:11.636 "raid_level": "raid1", 00:18:11.636 "superblock": true, 00:18:11.636 "num_base_bdevs": 4, 00:18:11.636 "num_base_bdevs_discovered": 3, 00:18:11.636 "num_base_bdevs_operational": 3, 00:18:11.636 "process": { 00:18:11.636 "type": "rebuild", 00:18:11.636 "target": "spare", 00:18:11.636 "progress": { 00:18:11.636 "blocks": 20480, 00:18:11.636 "percent": 32 00:18:11.636 } 00:18:11.636 }, 00:18:11.636 "base_bdevs_list": [ 00:18:11.636 { 00:18:11.636 "name": "spare", 00:18:11.636 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:11.636 "is_configured": true, 00:18:11.636 "data_offset": 2048, 00:18:11.636 "data_size": 63488 00:18:11.636 }, 00:18:11.636 { 00:18:11.636 "name": null, 00:18:11.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.636 "is_configured": false, 00:18:11.636 "data_offset": 2048, 00:18:11.636 "data_size": 63488 00:18:11.636 }, 00:18:11.636 { 00:18:11.636 "name": "BaseBdev3", 00:18:11.636 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:11.636 "is_configured": true, 00:18:11.636 "data_offset": 2048, 00:18:11.636 "data_size": 63488 00:18:11.636 }, 00:18:11.636 { 00:18:11.636 "name": "BaseBdev4", 00:18:11.636 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:11.636 "is_configured": true, 00:18:11.636 "data_offset": 2048, 00:18:11.636 "data_size": 63488 00:18:11.636 } 00:18:11.636 ] 00:18:11.636 }' 00:18:11.636 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.894 [2024-11-20 11:30:19.582032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.894 [2024-11-20 11:30:19.593906] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.894 [2024-11-20 11:30:19.594001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.894 [2024-11-20 11:30:19.594030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.894 [2024-11-20 11:30:19.594041] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.894 "name": "raid_bdev1", 00:18:11.894 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:11.894 "strip_size_kb": 0, 00:18:11.894 "state": "online", 00:18:11.894 "raid_level": "raid1", 00:18:11.894 "superblock": true, 00:18:11.894 "num_base_bdevs": 4, 00:18:11.894 "num_base_bdevs_discovered": 2, 00:18:11.894 "num_base_bdevs_operational": 2, 00:18:11.894 "base_bdevs_list": [ 00:18:11.894 { 00:18:11.894 "name": null, 00:18:11.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.894 "is_configured": false, 00:18:11.894 "data_offset": 0, 00:18:11.894 "data_size": 63488 00:18:11.894 }, 00:18:11.894 { 00:18:11.894 "name": null, 00:18:11.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.894 "is_configured": false, 00:18:11.894 "data_offset": 2048, 00:18:11.894 "data_size": 63488 00:18:11.894 }, 00:18:11.894 { 00:18:11.894 "name": "BaseBdev3", 00:18:11.894 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:11.894 "is_configured": true, 00:18:11.894 "data_offset": 2048, 00:18:11.894 "data_size": 63488 00:18:11.894 }, 00:18:11.894 { 00:18:11.894 "name": "BaseBdev4", 00:18:11.894 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:11.894 "is_configured": true, 00:18:11.894 
"data_offset": 2048, 00:18:11.894 "data_size": 63488 00:18:11.894 } 00:18:11.894 ] 00:18:11.894 }' 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.894 11:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.504 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.504 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.504 [2024-11-20 11:30:20.154075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.504 [2024-11-20 11:30:20.154167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.504 [2024-11-20 11:30:20.154210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:12.504 [2024-11-20 11:30:20.154226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.504 [2024-11-20 11:30:20.154852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.504 [2024-11-20 11:30:20.154884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.504 [2024-11-20 11:30:20.155008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.504 [2024-11-20 11:30:20.155028] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:12.504 [2024-11-20 11:30:20.155051] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:12.504 [2024-11-20 11:30:20.155086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.504 [2024-11-20 11:30:20.168980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:12.504 spare 00:18:12.504 11:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.504 11:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:12.504 [2024-11-20 11:30:20.171517] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.440 "name": "raid_bdev1", 00:18:13.440 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:13.440 "strip_size_kb": 0, 00:18:13.440 "state": "online", 00:18:13.440 
"raid_level": "raid1", 00:18:13.440 "superblock": true, 00:18:13.440 "num_base_bdevs": 4, 00:18:13.440 "num_base_bdevs_discovered": 3, 00:18:13.440 "num_base_bdevs_operational": 3, 00:18:13.440 "process": { 00:18:13.440 "type": "rebuild", 00:18:13.440 "target": "spare", 00:18:13.440 "progress": { 00:18:13.440 "blocks": 20480, 00:18:13.440 "percent": 32 00:18:13.440 } 00:18:13.440 }, 00:18:13.440 "base_bdevs_list": [ 00:18:13.440 { 00:18:13.440 "name": "spare", 00:18:13.440 "uuid": "488585e7-32cd-57c2-9d08-4a4eb0123012", 00:18:13.440 "is_configured": true, 00:18:13.440 "data_offset": 2048, 00:18:13.440 "data_size": 63488 00:18:13.440 }, 00:18:13.440 { 00:18:13.440 "name": null, 00:18:13.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.440 "is_configured": false, 00:18:13.440 "data_offset": 2048, 00:18:13.440 "data_size": 63488 00:18:13.440 }, 00:18:13.440 { 00:18:13.440 "name": "BaseBdev3", 00:18:13.440 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:13.440 "is_configured": true, 00:18:13.440 "data_offset": 2048, 00:18:13.440 "data_size": 63488 00:18:13.440 }, 00:18:13.440 { 00:18:13.440 "name": "BaseBdev4", 00:18:13.440 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:13.440 "is_configured": true, 00:18:13.440 "data_offset": 2048, 00:18:13.440 "data_size": 63488 00:18:13.440 } 00:18:13.440 ] 00:18:13.440 }' 00:18:13.440 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.698 [2024-11-20 11:30:21.356656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.698 [2024-11-20 11:30:21.380608] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.698 [2024-11-20 11:30:21.380708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.698 [2024-11-20 11:30:21.380734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.698 [2024-11-20 11:30:21.380748] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.698 
11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.698 "name": "raid_bdev1", 00:18:13.698 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:13.698 "strip_size_kb": 0, 00:18:13.698 "state": "online", 00:18:13.698 "raid_level": "raid1", 00:18:13.698 "superblock": true, 00:18:13.698 "num_base_bdevs": 4, 00:18:13.698 "num_base_bdevs_discovered": 2, 00:18:13.698 "num_base_bdevs_operational": 2, 00:18:13.698 "base_bdevs_list": [ 00:18:13.698 { 00:18:13.698 "name": null, 00:18:13.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.698 "is_configured": false, 00:18:13.698 "data_offset": 0, 00:18:13.698 "data_size": 63488 00:18:13.698 }, 00:18:13.698 { 00:18:13.698 "name": null, 00:18:13.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.698 "is_configured": false, 00:18:13.698 "data_offset": 2048, 00:18:13.698 "data_size": 63488 00:18:13.698 }, 00:18:13.698 { 00:18:13.698 "name": "BaseBdev3", 00:18:13.698 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:13.698 "is_configured": true, 00:18:13.698 "data_offset": 2048, 00:18:13.698 "data_size": 63488 00:18:13.698 }, 00:18:13.698 { 00:18:13.698 "name": "BaseBdev4", 00:18:13.698 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:13.698 "is_configured": true, 00:18:13.698 "data_offset": 2048, 00:18:13.698 "data_size": 63488 00:18:13.698 } 00:18:13.698 ] 00:18:13.698 }' 00:18:13.698 11:30:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.698 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.264 "name": "raid_bdev1", 00:18:14.264 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:14.264 "strip_size_kb": 0, 00:18:14.264 "state": "online", 00:18:14.264 "raid_level": "raid1", 00:18:14.264 "superblock": true, 00:18:14.264 "num_base_bdevs": 4, 00:18:14.264 "num_base_bdevs_discovered": 2, 00:18:14.264 "num_base_bdevs_operational": 2, 00:18:14.264 "base_bdevs_list": [ 00:18:14.264 { 00:18:14.264 "name": null, 00:18:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.264 "is_configured": false, 00:18:14.264 "data_offset": 0, 00:18:14.264 "data_size": 63488 00:18:14.264 }, 00:18:14.264 
{ 00:18:14.264 "name": null, 00:18:14.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.264 "is_configured": false, 00:18:14.264 "data_offset": 2048, 00:18:14.264 "data_size": 63488 00:18:14.264 }, 00:18:14.264 { 00:18:14.264 "name": "BaseBdev3", 00:18:14.264 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:14.264 "is_configured": true, 00:18:14.264 "data_offset": 2048, 00:18:14.264 "data_size": 63488 00:18:14.264 }, 00:18:14.264 { 00:18:14.264 "name": "BaseBdev4", 00:18:14.264 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:14.264 "is_configured": true, 00:18:14.264 "data_offset": 2048, 00:18:14.264 "data_size": 63488 00:18:14.264 } 00:18:14.264 ] 00:18:14.264 }' 00:18:14.264 11:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.264 [2024-11-20 11:30:22.100886] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:14.264 [2024-11-20 11:30:22.100994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.264 [2024-11-20 11:30:22.101027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:14.264 [2024-11-20 11:30:22.101045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.264 [2024-11-20 11:30:22.101608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.264 [2024-11-20 11:30:22.101659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:14.264 [2024-11-20 11:30:22.101765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:14.264 [2024-11-20 11:30:22.101790] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:14.264 [2024-11-20 11:30:22.101802] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:14.264 [2024-11-20 11:30:22.101832] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:14.264 BaseBdev1 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.264 11:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:15.635 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.635 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.636 11:30:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.636 "name": "raid_bdev1", 00:18:15.636 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:15.636 "strip_size_kb": 0, 00:18:15.636 "state": "online", 00:18:15.636 "raid_level": "raid1", 00:18:15.636 "superblock": true, 00:18:15.636 "num_base_bdevs": 4, 00:18:15.636 "num_base_bdevs_discovered": 2, 00:18:15.636 "num_base_bdevs_operational": 2, 00:18:15.636 "base_bdevs_list": [ 00:18:15.636 { 00:18:15.636 "name": null, 00:18:15.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.636 "is_configured": false, 00:18:15.636 "data_offset": 0, 00:18:15.636 "data_size": 63488 00:18:15.636 }, 00:18:15.636 { 00:18:15.636 "name": null, 00:18:15.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.636 
"is_configured": false, 00:18:15.636 "data_offset": 2048, 00:18:15.636 "data_size": 63488 00:18:15.636 }, 00:18:15.636 { 00:18:15.636 "name": "BaseBdev3", 00:18:15.636 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:15.636 "is_configured": true, 00:18:15.636 "data_offset": 2048, 00:18:15.636 "data_size": 63488 00:18:15.636 }, 00:18:15.636 { 00:18:15.636 "name": "BaseBdev4", 00:18:15.636 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:15.636 "is_configured": true, 00:18:15.636 "data_offset": 2048, 00:18:15.636 "data_size": 63488 00:18:15.636 } 00:18:15.636 ] 00:18:15.636 }' 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.636 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.894 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:15.895 "name": "raid_bdev1", 00:18:15.895 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:15.895 "strip_size_kb": 0, 00:18:15.895 "state": "online", 00:18:15.895 "raid_level": "raid1", 00:18:15.895 "superblock": true, 00:18:15.895 "num_base_bdevs": 4, 00:18:15.895 "num_base_bdevs_discovered": 2, 00:18:15.895 "num_base_bdevs_operational": 2, 00:18:15.895 "base_bdevs_list": [ 00:18:15.895 { 00:18:15.895 "name": null, 00:18:15.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.895 "is_configured": false, 00:18:15.895 "data_offset": 0, 00:18:15.895 "data_size": 63488 00:18:15.895 }, 00:18:15.895 { 00:18:15.895 "name": null, 00:18:15.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.895 "is_configured": false, 00:18:15.895 "data_offset": 2048, 00:18:15.895 "data_size": 63488 00:18:15.895 }, 00:18:15.895 { 00:18:15.895 "name": "BaseBdev3", 00:18:15.895 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:15.895 "is_configured": true, 00:18:15.895 "data_offset": 2048, 00:18:15.895 "data_size": 63488 00:18:15.895 }, 00:18:15.895 { 00:18:15.895 "name": "BaseBdev4", 00:18:15.895 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:15.895 "is_configured": true, 00:18:15.895 "data_offset": 2048, 00:18:15.895 "data_size": 63488 00:18:15.895 } 00:18:15.895 ] 00:18:15.895 }' 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.895 [2024-11-20 11:30:23.725352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.895 [2024-11-20 11:30:23.725609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:15.895 [2024-11-20 11:30:23.725646] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.895 request: 00:18:15.895 { 00:18:15.895 "base_bdev": "BaseBdev1", 00:18:15.895 "raid_bdev": "raid_bdev1", 00:18:15.895 "method": "bdev_raid_add_base_bdev", 00:18:15.895 "req_id": 1 00:18:15.895 } 00:18:15.895 Got JSON-RPC error response 00:18:15.895 response: 00:18:15.895 { 00:18:15.895 "code": -22, 00:18:15.895 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:15.895 } 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.895 11:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
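The `NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` call above succeeds precisely because the RPC fails with `-22`; a simplified sketch of that inversion pattern (an assumed reduction of autotest_common.sh's real `NOT()` helper, which additionally tracks the raw exit status in `es`):

```shell
# Simplified NOT(): succeed only when the wrapped command fails,
# mirroring how the test asserts the add_base_bdev RPC is rejected.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # the failure we expected
}

NOT false && echo "got the expected failure"
```

In the log, the `[[ 1 == 0 ]]` line after the JSON-RPC error response is this same mechanism at work: the nonzero status is captured, inverted, and the test proceeds only because the RPC was refused.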
00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.270 "name": "raid_bdev1", 00:18:17.270 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:17.270 "strip_size_kb": 0, 00:18:17.270 "state": "online", 00:18:17.270 "raid_level": "raid1", 00:18:17.270 "superblock": true, 00:18:17.270 "num_base_bdevs": 4, 00:18:17.270 "num_base_bdevs_discovered": 2, 00:18:17.270 "num_base_bdevs_operational": 2, 00:18:17.270 "base_bdevs_list": [ 00:18:17.270 { 00:18:17.270 "name": null, 00:18:17.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.270 "is_configured": false, 00:18:17.270 "data_offset": 0, 00:18:17.270 "data_size": 63488 00:18:17.270 }, 00:18:17.270 { 00:18:17.270 "name": null, 00:18:17.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.270 "is_configured": false, 00:18:17.270 "data_offset": 2048, 00:18:17.270 "data_size": 63488 00:18:17.270 }, 00:18:17.270 { 00:18:17.270 "name": "BaseBdev3", 00:18:17.270 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:17.270 "is_configured": true, 00:18:17.270 "data_offset": 2048, 00:18:17.270 "data_size": 63488 00:18:17.270 }, 00:18:17.270 { 00:18:17.270 "name": "BaseBdev4", 00:18:17.270 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:17.270 "is_configured": true, 00:18:17.270 "data_offset": 2048, 00:18:17.270 "data_size": 63488 00:18:17.270 } 00:18:17.270 ] 00:18:17.270 }' 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.270 11:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.529 11:30:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.529 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.529 "name": "raid_bdev1", 00:18:17.529 "uuid": "ae9e81d7-2795-4500-b660-6ea9ff4ab7bd", 00:18:17.530 "strip_size_kb": 0, 00:18:17.530 "state": "online", 00:18:17.530 "raid_level": "raid1", 00:18:17.530 "superblock": true, 00:18:17.530 "num_base_bdevs": 4, 00:18:17.530 "num_base_bdevs_discovered": 2, 00:18:17.530 "num_base_bdevs_operational": 2, 00:18:17.530 "base_bdevs_list": [ 00:18:17.530 { 00:18:17.530 "name": null, 00:18:17.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.530 "is_configured": false, 00:18:17.530 "data_offset": 0, 00:18:17.530 "data_size": 63488 00:18:17.530 }, 00:18:17.530 { 00:18:17.530 "name": null, 00:18:17.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.530 "is_configured": false, 00:18:17.530 "data_offset": 2048, 00:18:17.530 "data_size": 63488 00:18:17.530 }, 00:18:17.530 { 00:18:17.530 "name": "BaseBdev3", 00:18:17.530 "uuid": "8df5383f-0387-5198-93f1-cfa0b259d3fe", 00:18:17.530 "is_configured": true, 00:18:17.530 "data_offset": 2048, 00:18:17.530 "data_size": 63488 00:18:17.530 }, 
00:18:17.530 { 00:18:17.530 "name": "BaseBdev4", 00:18:17.530 "uuid": "78587d17-6ac6-50e2-bd4b-c34e6227ff57", 00:18:17.530 "is_configured": true, 00:18:17.530 "data_offset": 2048, 00:18:17.530 "data_size": 63488 00:18:17.530 } 00:18:17.530 ] 00:18:17.530 }' 00:18:17.530 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78188 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78188 ']' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78188 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78188 00:18:17.789 killing process with pid 78188 00:18:17.789 Received shutdown signal, test time was about 60.000000 seconds 00:18:17.789 00:18:17.789 Latency(us) 00:18:17.789 [2024-11-20T11:30:25.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.789 [2024-11-20T11:30:25.635Z] =================================================================================================================== 00:18:17.789 [2024-11-20T11:30:25.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78188' 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78188 00:18:17.789 [2024-11-20 11:30:25.472286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.789 11:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78188 00:18:17.789 [2024-11-20 11:30:25.472449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.789 [2024-11-20 11:30:25.472541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.789 [2024-11-20 11:30:25.472557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:18.356 [2024-11-20 11:30:25.924254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.293 11:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:19.293 00:18:19.293 real 0m29.236s 00:18:19.293 user 0m35.598s 00:18:19.293 sys 0m4.005s 00:18:19.293 11:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.293 11:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.293 ************************************ 00:18:19.293 END TEST raid_rebuild_test_sb 00:18:19.293 ************************************ 00:18:19.293 11:30:26 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:19.293 11:30:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:19.293 11:30:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.293 11:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:18:19.293 ************************************
00:18:19.293 START TEST raid_rebuild_test_io
00:18:19.293 ************************************
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78975
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78975
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78975 ']'
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:19.293 11:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:19.293 [2024-11-20 11:30:27.134905] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization...
00:18:19.293 [2024-11-20 11:30:27.135065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78975 ]
00:18:19.293 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:19.293 Zero copy mechanism will not be used.
00:18:19.552 [2024-11-20 11:30:27.312218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:19.809 [2024-11-20 11:30:27.442803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:19.809 [2024-11-20 11:30:27.646021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:19.809 [2024-11-20 11:30:27.646113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.469 BaseBdev1_malloc
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.469 [2024-11-20 11:30:28.222024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:18:20.469 [2024-11-20 11:30:28.222131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.469 [2024-11-20 11:30:28.222163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:18:20.469 [2024-11-20 11:30:28.222181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.469 [2024-11-20 11:30:28.225171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.469 [2024-11-20 11:30:28.225223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:18:20.469 BaseBdev1
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.469 BaseBdev2_malloc
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.469 [2024-11-20 11:30:28.280893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:18:20.469 [2024-11-20 11:30:28.280981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.469 [2024-11-20 11:30:28.281011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:18:20.469 [2024-11-20 11:30:28.281031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.469 [2024-11-20 11:30:28.283939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.469 [2024-11-20 11:30:28.283991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:18:20.469 BaseBdev2
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.469 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 BaseBdev3_malloc
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 [2024-11-20 11:30:28.350795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:18:20.727 [2024-11-20 11:30:28.350896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.727 [2024-11-20 11:30:28.350929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:20.727 [2024-11-20 11:30:28.350947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.727 [2024-11-20 11:30:28.354376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.727 [2024-11-20 11:30:28.354453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:18:20.727 BaseBdev3
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 BaseBdev4_malloc
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 [2024-11-20 11:30:28.407694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:18:20.727 [2024-11-20 11:30:28.407803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.727 [2024-11-20 11:30:28.407857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:18:20.727 [2024-11-20 11:30:28.407876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.727 [2024-11-20 11:30:28.410896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.727 [2024-11-20 11:30:28.411099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:18:20.727 BaseBdev4
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 spare_malloc
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 spare_delay
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.727 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.727 [2024-11-20 11:30:28.472192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:18:20.727 [2024-11-20 11:30:28.472279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.727 [2024-11-20 11:30:28.472315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:18:20.728 [2024-11-20 11:30:28.472333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.728 [2024-11-20 11:30:28.475342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.728 [2024-11-20 11:30:28.475406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:18:20.728 spare
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.728 [2024-11-20 11:30:28.484394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:20.728 [2024-11-20 11:30:28.487064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:20.728 [2024-11-20 11:30:28.487169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:20.728 [2024-11-20 11:30:28.487253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:20.728 [2024-11-20 11:30:28.487383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:18:20.728 [2024-11-20 11:30:28.487408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:18:20.728 [2024-11-20 11:30:28.487820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:18:20.728 [2024-11-20 11:30:28.488064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:18:20.728 [2024-11-20 11:30:28.488093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:18:20.728 [2024-11-20 11:30:28.488386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:20.728 "name": "raid_bdev1",
00:18:20.728 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08",
00:18:20.728 "strip_size_kb": 0,
00:18:20.728 "state": "online",
00:18:20.728 "raid_level": "raid1",
00:18:20.728 "superblock": false,
00:18:20.728 "num_base_bdevs": 4,
00:18:20.728 "num_base_bdevs_discovered": 4,
00:18:20.728 "num_base_bdevs_operational": 4,
00:18:20.728 "base_bdevs_list": [
00:18:20.728 {
00:18:20.728 "name": "BaseBdev1",
00:18:20.728 "uuid": "87ed476a-e984-5e1e-9f6e-8b184f38a18a",
00:18:20.728 "is_configured": true,
00:18:20.728 "data_offset": 0,
00:18:20.728 "data_size": 65536
00:18:20.728 },
00:18:20.728 {
00:18:20.728 "name": "BaseBdev2",
00:18:20.728 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d",
00:18:20.728 "is_configured": true,
00:18:20.728 "data_offset": 0,
00:18:20.728 "data_size": 65536
00:18:20.728 },
00:18:20.728 {
00:18:20.728 "name": "BaseBdev3",
00:18:20.728 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4",
00:18:20.728 "is_configured": true,
00:18:20.728 "data_offset": 0,
00:18:20.728 "data_size": 65536
00:18:20.728 },
00:18:20.728 {
00:18:20.728 "name": "BaseBdev4",
00:18:20.728 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9",
00:18:20.728 "is_configured": true,
00:18:20.728 "data_offset": 0,
00:18:20.728 "data_size": 65536
00:18:20.728 }
00:18:20.728 ]
00:18:20.728 }'
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:20.728 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.296 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:21.296 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.296 11:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.296 11:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:18:21.296 [2024-11-20 11:30:29.005053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.296 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']'
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-11-20 11:30:29.100630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.297 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.558 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:21.558 "name": "raid_bdev1",
00:18:21.558 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08",
00:18:21.558 "strip_size_kb": 0,
00:18:21.558 "state": "online",
00:18:21.558 "raid_level": "raid1",
00:18:21.558 "superblock": false,
00:18:21.558 "num_base_bdevs": 4,
00:18:21.558 "num_base_bdevs_discovered": 3,
00:18:21.558 "num_base_bdevs_operational": 3,
00:18:21.558 "base_bdevs_list": [
00:18:21.558 {
00:18:21.558 "name": null,
00:18:21.558 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:21.558 "is_configured": false,
00:18:21.558 "data_offset": 0,
00:18:21.558 "data_size": 65536
00:18:21.558 },
00:18:21.558 {
00:18:21.558 "name": "BaseBdev2",
00:18:21.558 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d",
00:18:21.558 "is_configured": true,
00:18:21.558 "data_offset": 0,
00:18:21.558 "data_size": 65536
00:18:21.558 },
00:18:21.558 {
00:18:21.559 "name": "BaseBdev3",
00:18:21.559 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4",
00:18:21.559 "is_configured": true,
00:18:21.559 "data_offset": 0,
00:18:21.559 "data_size": 65536
00:18:21.559 },
00:18:21.559 {
00:18:21.559 "name": "BaseBdev4",
00:18:21.559 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9",
00:18:21.559 "is_configured": true,
00:18:21.559 "data_offset": 0,
00:18:21.559 "data_size": 65536
00:18:21.559 }
00:18:21.559 ]
00:18:21.559 }'
00:18:21.559 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:21.559 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.559 [2024-11-20 11:30:29.236784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:18:21.559 I/O size of 3145728 is greater than zero copy threshold (65536).
00:18:21.559 Zero copy mechanism will not be used.
00:18:21.559 Running I/O for 60 seconds...
00:18:21.817 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:18:21.817 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.817 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:21.817 [2024-11-20 11:30:29.595062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:18:21.817 11:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.817 11:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:18:22.077 [2024-11-20 11:30:29.694322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:18:22.077 [2024-11-20 11:30:29.697223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:18:22.077 [2024-11-20 11:30:29.809121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
[2024-11-20 11:30:29.810100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:18:22.336 [2024-11-20 11:30:29.954332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
[2024-11-20 11:30:29.954965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:18:22.594 127.00 IOPS, 381.00 MiB/s [2024-11-20T11:30:30.440Z]
[2024-11-20 11:30:30.315027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:18:22.852 [2024-11-20 11:30:30.439060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:22.852 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:23.110 "name": "raid_bdev1",
00:18:23.110 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08",
00:18:23.110 "strip_size_kb": 0,
00:18:23.110 "state": "online",
00:18:23.110 "raid_level": "raid1",
00:18:23.110 "superblock": false,
00:18:23.110 "num_base_bdevs": 4,
00:18:23.110 "num_base_bdevs_discovered": 4,
00:18:23.110 "num_base_bdevs_operational": 4,
00:18:23.110 "process": {
00:18:23.110 "type": "rebuild",
00:18:23.110 "target": "spare",
00:18:23.110 "progress": {
00:18:23.110 "blocks": 12288,
00:18:23.110 "percent": 18
00:18:23.110 }
00:18:23.110 },
00:18:23.110 "base_bdevs_list": [
00:18:23.110 {
00:18:23.110 "name": "spare",
00:18:23.110 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7",
00:18:23.110 "is_configured": true,
00:18:23.110 "data_offset": 0,
00:18:23.110 "data_size": 65536
00:18:23.110 },
00:18:23.110 {
00:18:23.110 "name": "BaseBdev2",
00:18:23.110 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d",
00:18:23.110 "is_configured": true,
00:18:23.110 "data_offset": 0,
00:18:23.110 "data_size": 65536
00:18:23.110 },
00:18:23.110 {
00:18:23.110 "name": "BaseBdev3",
00:18:23.110 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4",
00:18:23.110 "is_configured": true,
00:18:23.110 "data_offset": 0,
00:18:23.110 "data_size": 65536
00:18:23.110 },
00:18:23.110 {
00:18:23.110 "name": "BaseBdev4",
00:18:23.110 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9",
00:18:23.110 "is_configured": true,
00:18:23.110 "data_offset": 0,
00:18:23.110 "data_size": 65536
00:18:23.110 }
00:18:23.110 ]
00:18:23.110 }'
[2024-11-20 11:30:30.706665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.110 11:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:23.110 [2024-11-20 11:30:30.794697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:23.110 [2024-11-20 11:30:30.829802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:18:23.110 [2024-11-20 11:30:30.830963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:18:23.110 [2024-11-20 11:30:30.943372] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:18:23.370 [2024-11-20 11:30:30.960398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:23.370 [2024-11-20 11:30:30.960877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:18:23.370 [2024-11-20 11:30:30.960944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:18:23.370 [2024-11-20 11:30:30.997407] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:23.370 "name": "raid_bdev1",
00:18:23.370 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08",
00:18:23.370 "strip_size_kb": 0,
00:18:23.370 "state": "online",
00:18:23.370 "raid_level": "raid1",
00:18:23.370 "superblock": false,
00:18:23.370 "num_base_bdevs": 4,
00:18:23.370 "num_base_bdevs_discovered": 3,
00:18:23.370 "num_base_bdevs_operational": 3,
00:18:23.370 "base_bdevs_list": [
00:18:23.370 {
00:18:23.370 "name": null,
00:18:23.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.370 "is_configured": false,
00:18:23.370 "data_offset": 0,
00:18:23.370 "data_size": 65536
00:18:23.370 },
00:18:23.370 {
00:18:23.370 "name": "BaseBdev2",
00:18:23.370 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d",
00:18:23.370 "is_configured": true,
00:18:23.370 "data_offset": 0,
00:18:23.370 "data_size": 65536
00:18:23.370 },
00:18:23.370 {
00:18:23.370 "name": "BaseBdev3",
00:18:23.370 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4",
00:18:23.370 "is_configured": true,
00:18:23.370 "data_offset": 0,
00:18:23.370 "data_size": 65536
00:18:23.370 },
00:18:23.370 {
00:18:23.370 "name": "BaseBdev4",
00:18:23.370 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9",
00:18:23.370 "is_configured": true,
00:18:23.370 "data_offset": 0,
00:18:23.370 "data_size": 65536
00:18:23.370 }
00:18:23.370 ]
00:18:23.370 }'
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:23.370 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:23.629 110.50 IOPS, 331.50 MiB/s [2024-11-20T11:30:31.475Z]
11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.629 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:18:23.888 "name": "raid_bdev1",
00:18:23.888 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08",
00:18:23.888 "strip_size_kb": 0,
00:18:23.888 "state": "online",
00:18:23.888 "raid_level": "raid1",
00:18:23.888 "superblock": false,
00:18:23.888 "num_base_bdevs": 4,
00:18:23.888 "num_base_bdevs_discovered": 3,
00:18:23.888 "num_base_bdevs_operational": 3,
00:18:23.888 "base_bdevs_list": [
00:18:23.888 {
00:18:23.888 "name": null,
00:18:23.888 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:23.888 "is_configured": false,
00:18:23.888 "data_offset": 0,
00:18:23.888 "data_size": 65536
00:18:23.888 },
00:18:23.888 {
00:18:23.888 "name": "BaseBdev2",
00:18:23.888 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d",
00:18:23.888 "is_configured": true,
00:18:23.888 "data_offset": 0,
00:18:23.888 "data_size": 65536
00:18:23.888 },
00:18:23.888 {
00:18:23.888 "name": "BaseBdev3",
00:18:23.888 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4",
00:18:23.888 "is_configured": true,
00:18:23.888 "data_offset": 0,
00:18:23.888 "data_size": 65536
00:18:23.888 },
00:18:23.888 {
00:18:23.888 "name": "BaseBdev4",
00:18:23.888 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9",
00:18:23.888 "is_configured": true,
00:18:23.888 "data_offset": 0,
00:18:23.888 "data_size": 65536
00:18:23.888 }
00:18:23.888 ]
00:18:23.888 }'
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io --
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 [2024-11-20 11:30:31.616350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.888 11:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:23.888 [2024-11-20 11:30:31.722259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:23.888 [2024-11-20 11:30:31.725267] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.147 [2024-11-20 11:30:31.840759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:24.147 [2024-11-20 11:30:31.841688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:24.147 [2024-11-20 11:30:31.968009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:24.147 [2024-11-20 11:30:31.968510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:24.715 112.67 IOPS, 338.00 MiB/s [2024-11-20T11:30:32.561Z] [2024-11-20 11:30:32.380454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:24.974 [2024-11-20 11:30:32.623166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:24.974 11:30:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.974 "name": "raid_bdev1", 00:18:24.974 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:24.974 "strip_size_kb": 0, 00:18:24.974 "state": "online", 00:18:24.974 "raid_level": "raid1", 00:18:24.974 "superblock": false, 00:18:24.974 "num_base_bdevs": 4, 00:18:24.974 "num_base_bdevs_discovered": 4, 00:18:24.974 "num_base_bdevs_operational": 4, 00:18:24.974 "process": { 00:18:24.974 "type": "rebuild", 00:18:24.974 "target": "spare", 00:18:24.974 "progress": { 00:18:24.974 "blocks": 14336, 00:18:24.974 "percent": 21 00:18:24.974 } 00:18:24.974 }, 00:18:24.974 "base_bdevs_list": [ 00:18:24.974 { 00:18:24.974 "name": "spare", 00:18:24.974 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:24.974 "is_configured": true, 00:18:24.974 "data_offset": 0, 00:18:24.974 "data_size": 65536 
00:18:24.974 }, 00:18:24.974 { 00:18:24.974 "name": "BaseBdev2", 00:18:24.974 "uuid": "b2f21d52-8c4f-53a6-bb66-aaaffe81b51d", 00:18:24.974 "is_configured": true, 00:18:24.974 "data_offset": 0, 00:18:24.974 "data_size": 65536 00:18:24.974 }, 00:18:24.974 { 00:18:24.974 "name": "BaseBdev3", 00:18:24.974 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:24.974 "is_configured": true, 00:18:24.974 "data_offset": 0, 00:18:24.974 "data_size": 65536 00:18:24.974 }, 00:18:24.974 { 00:18:24.974 "name": "BaseBdev4", 00:18:24.974 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:24.974 "is_configured": true, 00:18:24.974 "data_offset": 0, 00:18:24.974 "data_size": 65536 00:18:24.974 } 00:18:24.974 ] 00:18:24.974 }' 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.974 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:25.233 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.234 [2024-11-20 11:30:32.839920] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:25.234 [2024-11-20 11:30:32.860485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:25.234 [2024-11-20 11:30:32.972919] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:25.234 [2024-11-20 11:30:32.972998] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.234 11:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.234 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.234 11:30:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.234 "name": "raid_bdev1", 00:18:25.234 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:25.234 "strip_size_kb": 0, 00:18:25.234 "state": "online", 00:18:25.234 "raid_level": "raid1", 00:18:25.234 "superblock": false, 00:18:25.234 "num_base_bdevs": 4, 00:18:25.234 "num_base_bdevs_discovered": 3, 00:18:25.234 "num_base_bdevs_operational": 3, 00:18:25.234 "process": { 00:18:25.234 "type": "rebuild", 00:18:25.234 "target": "spare", 00:18:25.234 "progress": { 00:18:25.234 "blocks": 16384, 00:18:25.234 "percent": 25 00:18:25.234 } 00:18:25.234 }, 00:18:25.234 "base_bdevs_list": [ 00:18:25.234 { 00:18:25.234 "name": "spare", 00:18:25.234 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:25.234 "is_configured": true, 00:18:25.234 "data_offset": 0, 00:18:25.234 "data_size": 65536 00:18:25.234 }, 00:18:25.234 { 00:18:25.234 "name": null, 00:18:25.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.234 "is_configured": false, 00:18:25.234 "data_offset": 0, 00:18:25.234 "data_size": 65536 00:18:25.234 }, 00:18:25.234 { 00:18:25.234 "name": "BaseBdev3", 00:18:25.234 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:25.234 "is_configured": true, 00:18:25.234 "data_offset": 0, 00:18:25.234 "data_size": 65536 00:18:25.234 }, 00:18:25.234 { 00:18:25.234 "name": "BaseBdev4", 00:18:25.234 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:25.234 "is_configured": true, 00:18:25.234 "data_offset": 0, 00:18:25.234 "data_size": 65536 00:18:25.234 } 00:18:25.234 ] 00:18:25.234 }' 00:18:25.234 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.493 "name": "raid_bdev1", 00:18:25.493 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:25.493 "strip_size_kb": 0, 00:18:25.493 "state": "online", 00:18:25.493 "raid_level": "raid1", 00:18:25.493 "superblock": false, 00:18:25.493 "num_base_bdevs": 4, 00:18:25.493 "num_base_bdevs_discovered": 3, 00:18:25.493 "num_base_bdevs_operational": 3, 00:18:25.493 "process": { 00:18:25.493 "type": "rebuild", 00:18:25.493 "target": "spare", 00:18:25.493 "progress": { 00:18:25.493 "blocks": 18432, 00:18:25.493 "percent": 28 00:18:25.493 } 00:18:25.493 }, 
00:18:25.493 "base_bdevs_list": [ 00:18:25.493 { 00:18:25.493 "name": "spare", 00:18:25.493 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:25.493 "is_configured": true, 00:18:25.493 "data_offset": 0, 00:18:25.493 "data_size": 65536 00:18:25.493 }, 00:18:25.493 { 00:18:25.493 "name": null, 00:18:25.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.493 "is_configured": false, 00:18:25.493 "data_offset": 0, 00:18:25.493 "data_size": 65536 00:18:25.493 }, 00:18:25.493 { 00:18:25.493 "name": "BaseBdev3", 00:18:25.493 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:25.493 "is_configured": true, 00:18:25.493 "data_offset": 0, 00:18:25.493 "data_size": 65536 00:18:25.493 }, 00:18:25.493 { 00:18:25.493 "name": "BaseBdev4", 00:18:25.493 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:25.493 "is_configured": true, 00:18:25.493 "data_offset": 0, 00:18:25.493 "data_size": 65536 00:18:25.493 } 00:18:25.493 ] 00:18:25.493 }' 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.493 [2024-11-20 11:30:33.250859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:25.493 101.75 IOPS, 305.25 MiB/s [2024-11-20T11:30:33.339Z] 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.493 11:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.752 [2024-11-20 11:30:33.485800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:25.752 [2024-11-20 11:30:33.486633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:26.011 [2024-11-20 11:30:33.824460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:26.011 [2024-11-20 11:30:33.826399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:26.269 [2024-11-20 11:30:34.043622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:26.269 [2024-11-20 11:30:34.044425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:26.528 93.60 IOPS, 280.80 MiB/s [2024-11-20T11:30:34.374Z] [2024-11-20 11:30:34.315644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.528 11:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.528 11:30:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.787 "name": "raid_bdev1", 00:18:26.787 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:26.787 "strip_size_kb": 0, 00:18:26.787 "state": "online", 00:18:26.787 "raid_level": "raid1", 00:18:26.787 "superblock": false, 00:18:26.787 "num_base_bdevs": 4, 00:18:26.787 "num_base_bdevs_discovered": 3, 00:18:26.787 "num_base_bdevs_operational": 3, 00:18:26.787 "process": { 00:18:26.787 "type": "rebuild", 00:18:26.787 "target": "spare", 00:18:26.787 "progress": { 00:18:26.787 "blocks": 32768, 00:18:26.787 "percent": 50 00:18:26.787 } 00:18:26.787 }, 00:18:26.787 "base_bdevs_list": [ 00:18:26.787 { 00:18:26.787 "name": "spare", 00:18:26.787 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:26.787 "is_configured": true, 00:18:26.787 "data_offset": 0, 00:18:26.787 "data_size": 65536 00:18:26.787 }, 00:18:26.787 { 00:18:26.787 "name": null, 00:18:26.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.787 "is_configured": false, 00:18:26.787 "data_offset": 0, 00:18:26.787 "data_size": 65536 00:18:26.787 }, 00:18:26.787 { 00:18:26.787 "name": "BaseBdev3", 00:18:26.787 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:26.787 "is_configured": true, 00:18:26.787 "data_offset": 0, 00:18:26.787 "data_size": 65536 00:18:26.787 }, 00:18:26.787 { 00:18:26.787 "name": "BaseBdev4", 00:18:26.787 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:26.787 "is_configured": true, 00:18:26.787 "data_offset": 0, 00:18:26.787 "data_size": 65536 00:18:26.787 } 00:18:26.787 ] 00:18:26.787 }' 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.787 [2024-11-20 11:30:34.430559] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.787 11:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.353 [2024-11-20 11:30:34.901926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:27.353 [2024-11-20 11:30:35.146370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:27.611 88.50 IOPS, 265.50 MiB/s [2024-11-20T11:30:35.457Z] [2024-11-20 11:30:35.278657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.870 11:30:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.870 "name": "raid_bdev1", 00:18:27.870 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:27.870 "strip_size_kb": 0, 00:18:27.870 "state": "online", 00:18:27.870 "raid_level": "raid1", 00:18:27.870 "superblock": false, 00:18:27.870 "num_base_bdevs": 4, 00:18:27.870 "num_base_bdevs_discovered": 3, 00:18:27.870 "num_base_bdevs_operational": 3, 00:18:27.870 "process": { 00:18:27.870 "type": "rebuild", 00:18:27.870 "target": "spare", 00:18:27.870 "progress": { 00:18:27.870 "blocks": 49152, 00:18:27.870 "percent": 75 00:18:27.870 } 00:18:27.870 }, 00:18:27.870 "base_bdevs_list": [ 00:18:27.870 { 00:18:27.870 "name": "spare", 00:18:27.870 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:27.870 "is_configured": true, 00:18:27.870 "data_offset": 0, 00:18:27.870 "data_size": 65536 00:18:27.870 }, 00:18:27.870 { 00:18:27.870 "name": null, 00:18:27.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.870 "is_configured": false, 00:18:27.870 "data_offset": 0, 00:18:27.870 "data_size": 65536 00:18:27.870 }, 00:18:27.870 { 00:18:27.870 "name": "BaseBdev3", 00:18:27.870 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:27.870 "is_configured": true, 00:18:27.870 "data_offset": 0, 00:18:27.870 "data_size": 65536 00:18:27.870 }, 00:18:27.870 { 00:18:27.870 "name": "BaseBdev4", 00:18:27.870 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:27.870 "is_configured": true, 00:18:27.870 "data_offset": 0, 00:18:27.870 "data_size": 65536 00:18:27.870 } 00:18:27.870 ] 00:18:27.870 }' 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.870 [2024-11-20 11:30:35.632226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.870 11:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.438 [2024-11-20 11:30:36.089735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:28.697 79.71 IOPS, 239.14 MiB/s [2024-11-20T11:30:36.543Z] [2024-11-20 11:30:36.436303] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:28.956 [2024-11-20 11:30:36.545756] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:28.956 [2024-11-20 11:30:36.550297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.956 "name": "raid_bdev1", 00:18:28.956 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:28.956 "strip_size_kb": 0, 00:18:28.956 "state": "online", 00:18:28.956 "raid_level": "raid1", 00:18:28.956 "superblock": false, 00:18:28.956 "num_base_bdevs": 4, 00:18:28.956 "num_base_bdevs_discovered": 3, 00:18:28.956 "num_base_bdevs_operational": 3, 00:18:28.956 "base_bdevs_list": [ 00:18:28.956 { 00:18:28.956 "name": "spare", 00:18:28.956 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:28.956 "is_configured": true, 00:18:28.956 "data_offset": 0, 00:18:28.956 "data_size": 65536 00:18:28.956 }, 00:18:28.956 { 00:18:28.956 "name": null, 00:18:28.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.956 "is_configured": false, 00:18:28.956 "data_offset": 0, 00:18:28.956 "data_size": 65536 00:18:28.956 }, 00:18:28.956 { 00:18:28.956 "name": "BaseBdev3", 00:18:28.956 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:28.956 "is_configured": true, 00:18:28.956 "data_offset": 0, 00:18:28.956 "data_size": 65536 00:18:28.956 }, 00:18:28.956 { 00:18:28.956 "name": "BaseBdev4", 00:18:28.956 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:28.956 "is_configured": true, 00:18:28.956 "data_offset": 0, 00:18:28.956 "data_size": 65536 00:18:28.956 } 00:18:28.956 ] 00:18:28.956 }' 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.956 
11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:28.956 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.216 "name": "raid_bdev1", 00:18:29.216 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:29.216 "strip_size_kb": 0, 00:18:29.216 "state": "online", 00:18:29.216 "raid_level": "raid1", 00:18:29.216 "superblock": false, 00:18:29.216 "num_base_bdevs": 4, 00:18:29.216 "num_base_bdevs_discovered": 3, 00:18:29.216 "num_base_bdevs_operational": 3, 00:18:29.216 
"base_bdevs_list": [ 00:18:29.216 { 00:18:29.216 "name": "spare", 00:18:29.216 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:29.216 "is_configured": true, 00:18:29.216 "data_offset": 0, 00:18:29.216 "data_size": 65536 00:18:29.216 }, 00:18:29.216 { 00:18:29.216 "name": null, 00:18:29.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.216 "is_configured": false, 00:18:29.216 "data_offset": 0, 00:18:29.216 "data_size": 65536 00:18:29.216 }, 00:18:29.216 { 00:18:29.216 "name": "BaseBdev3", 00:18:29.216 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:29.216 "is_configured": true, 00:18:29.216 "data_offset": 0, 00:18:29.216 "data_size": 65536 00:18:29.216 }, 00:18:29.216 { 00:18:29.216 "name": "BaseBdev4", 00:18:29.216 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:29.216 "is_configured": true, 00:18:29.216 "data_offset": 0, 00:18:29.216 "data_size": 65536 00:18:29.216 } 00:18:29.216 ] 00:18:29.216 }' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.216 11:30:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.216 11:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.216 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.216 "name": "raid_bdev1", 00:18:29.216 "uuid": "5f7ed00c-4ab9-4db0-8baf-75c2e32b1c08", 00:18:29.216 "strip_size_kb": 0, 00:18:29.216 "state": "online", 00:18:29.216 "raid_level": "raid1", 00:18:29.216 "superblock": false, 00:18:29.216 "num_base_bdevs": 4, 00:18:29.216 "num_base_bdevs_discovered": 3, 00:18:29.216 "num_base_bdevs_operational": 3, 00:18:29.216 "base_bdevs_list": [ 00:18:29.217 { 00:18:29.217 "name": "spare", 00:18:29.217 "uuid": "401e3aab-ead5-5c3e-a91f-fb770d322ee7", 00:18:29.217 "is_configured": true, 00:18:29.217 "data_offset": 0, 00:18:29.217 "data_size": 65536 00:18:29.217 }, 00:18:29.217 { 00:18:29.217 "name": null, 00:18:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.217 "is_configured": false, 00:18:29.217 "data_offset": 0, 00:18:29.217 "data_size": 65536 00:18:29.217 }, 
00:18:29.217 { 00:18:29.217 "name": "BaseBdev3", 00:18:29.217 "uuid": "3e4dcd61-49d7-56be-8f8c-96017c4953b4", 00:18:29.217 "is_configured": true, 00:18:29.217 "data_offset": 0, 00:18:29.217 "data_size": 65536 00:18:29.217 }, 00:18:29.217 { 00:18:29.217 "name": "BaseBdev4", 00:18:29.217 "uuid": "2383dbee-28a7-5c77-bea0-810e47349df9", 00:18:29.217 "is_configured": true, 00:18:29.217 "data_offset": 0, 00:18:29.217 "data_size": 65536 00:18:29.217 } 00:18:29.217 ] 00:18:29.217 }' 00:18:29.217 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.217 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.733 74.00 IOPS, 222.00 MiB/s [2024-11-20T11:30:37.579Z] 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.733 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.733 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.733 [2024-11-20 11:30:37.523219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.734 [2024-11-20 11:30:37.523589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.734 00:18:29.734 Latency(us) 00:18:29.734 [2024-11-20T11:30:37.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.734 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:29.734 raid_bdev1 : 8.33 71.93 215.79 0.00 0.00 18455.32 299.75 122969.37 00:18:29.734 [2024-11-20T11:30:37.580Z] =================================================================================================================== 00:18:29.734 [2024-11-20T11:30:37.580Z] Total : 71.93 215.79 0.00 0.00 18455.32 299.75 122969.37 00:18:29.993 [2024-11-20 11:30:37.588376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:29.993 { 00:18:29.993 "results": [ 00:18:29.993 { 00:18:29.993 "job": "raid_bdev1", 00:18:29.993 "core_mask": "0x1", 00:18:29.993 "workload": "randrw", 00:18:29.993 "percentage": 50, 00:18:29.993 "status": "finished", 00:18:29.993 "queue_depth": 2, 00:18:29.993 "io_size": 3145728, 00:18:29.993 "runtime": 8.327428, 00:18:29.993 "iops": 71.93097316482353, 00:18:29.993 "mibps": 215.7929194944706, 00:18:29.993 "io_failed": 0, 00:18:29.993 "io_timeout": 0, 00:18:29.993 "avg_latency_us": 18455.322458643193, 00:18:29.993 "min_latency_us": 299.75272727272727, 00:18:29.993 "max_latency_us": 122969.36727272728 00:18:29.993 } 00:18:29.993 ], 00:18:29.993 "core_count": 1 00:18:29.993 } 00:18:29.993 [2024-11-20 11:30:37.588989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.993 [2024-11-20 11:30:37.589177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.993 [2024-11-20 11:30:37.589240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:29.993 11:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.993 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:30.252 /dev/nbd0 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:30.252 11:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.252 1+0 records in 00:18:30.252 1+0 records out 00:18:30.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370068 s, 11.1 MB/s 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:30.252 11:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- 
# nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:30.252 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:30.511 /dev/nbd1 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 
-- # (( i = 1 )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:30.511 1+0 records in 00:18:30.511 1+0 records out 00:18:30.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395594 s, 10.4 MB/s 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:30.511 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:18:30.769 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:31.061 11:30:38 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.061 11:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:31.321 /dev/nbd1 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:31.321 1+0 records in 00:18:31.321 1+0 records out 00:18:31.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582368 s, 7.0 MB/s 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.321 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:31.587 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.846 
11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:31.846 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78975 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78975 ']' 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78975 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78975 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.105 killing process with pid 78975 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78975' 00:18:32.105 Received shutdown signal, test time was about 10.643495 seconds 00:18:32.105 00:18:32.105 Latency(us) 00:18:32.105 [2024-11-20T11:30:39.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.105 [2024-11-20T11:30:39.951Z] =================================================================================================================== 00:18:32.105 [2024-11-20T11:30:39.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78975 00:18:32.105 [2024-11-20 11:30:39.883030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.105 11:30:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78975 00:18:32.673 [2024-11-20 11:30:40.265633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.610 11:30:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:18:33.610 00:18:33.610 real 0m14.351s 00:18:33.610 user 0m18.879s 00:18:33.610 sys 0m1.733s 00:18:33.610 11:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.610 11:30:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.610 ************************************ 00:18:33.610 END TEST raid_rebuild_test_io 00:18:33.610 ************************************ 00:18:33.610 11:30:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:18:33.610 11:30:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:33.610 11:30:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.610 11:30:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.611 ************************************ 00:18:33.611 START TEST raid_rebuild_test_sb_io 00:18:33.611 ************************************ 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79405 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79405 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79405 ']' 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.611 11:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.869 [2024-11-20 11:30:41.524876] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:18:33.869 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:33.869 Zero copy mechanism will not be used. 
00:18:33.869 [2024-11-20 11:30:41.525063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79405 ] 00:18:33.869 [2024-11-20 11:30:41.713125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.128 [2024-11-20 11:30:41.859327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.387 [2024-11-20 11:30:42.065455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.387 [2024-11-20 11:30:42.065515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 BaseBdev1_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 [2024-11-20 11:30:42.568048] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:34.955 [2024-11-20 11:30:42.568137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.955 [2024-11-20 11:30:42.568175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.955 [2024-11-20 11:30:42.568195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.955 [2024-11-20 11:30:42.571119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.955 [2024-11-20 11:30:42.571175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.955 BaseBdev1 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 BaseBdev2_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 [2024-11-20 11:30:42.624416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:34.955 [2024-11-20 11:30:42.624499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:34.955 [2024-11-20 11:30:42.624529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:34.955 [2024-11-20 11:30:42.624549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.955 [2024-11-20 11:30:42.627373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.955 [2024-11-20 11:30:42.627424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.955 BaseBdev2 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 BaseBdev3_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 [2024-11-20 11:30:42.686282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:34.955 [2024-11-20 11:30:42.686359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.955 [2024-11-20 11:30:42.686403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:34.955 
[2024-11-20 11:30:42.686422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.955 [2024-11-20 11:30:42.689779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.955 [2024-11-20 11:30:42.689838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.955 BaseBdev3 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.955 BaseBdev4_malloc 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.955 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.956 [2024-11-20 11:30:42.743061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:34.956 [2024-11-20 11:30:42.743133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.956 [2024-11-20 11:30:42.743162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:34.956 [2024-11-20 11:30:42.743184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.956 [2024-11-20 11:30:42.745991] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.956 [2024-11-20 11:30:42.746046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:34.956 BaseBdev4 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.956 spare_malloc 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.956 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.214 spare_delay 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.215 [2024-11-20 11:30:42.803317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.215 [2024-11-20 11:30:42.803394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.215 [2024-11-20 11:30:42.803433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:35.215 [2024-11-20 11:30:42.803451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.215 [2024-11-20 11:30:42.806289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.215 [2024-11-20 11:30:42.806343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.215 spare 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.215 [2024-11-20 11:30:42.815391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.215 [2024-11-20 11:30:42.817851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.215 [2024-11-20 11:30:42.817959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.215 [2024-11-20 11:30:42.818044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:35.215 [2024-11-20 11:30:42.818306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:35.215 [2024-11-20 11:30:42.818350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:35.215 [2024-11-20 11:30:42.818695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.215 [2024-11-20 11:30:42.818962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:35.215 [2024-11-20 11:30:42.818990] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:35.215 [2024-11-20 11:30:42.819185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.215 "name": "raid_bdev1", 00:18:35.215 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:35.215 "strip_size_kb": 0, 00:18:35.215 "state": "online", 00:18:35.215 "raid_level": "raid1", 00:18:35.215 "superblock": true, 00:18:35.215 "num_base_bdevs": 4, 00:18:35.215 "num_base_bdevs_discovered": 4, 00:18:35.215 "num_base_bdevs_operational": 4, 00:18:35.215 "base_bdevs_list": [ 00:18:35.215 { 00:18:35.215 "name": "BaseBdev1", 00:18:35.215 "uuid": "544dffd2-adcd-55e9-bb4f-083ba68a9bf3", 00:18:35.215 "is_configured": true, 00:18:35.215 "data_offset": 2048, 00:18:35.215 "data_size": 63488 00:18:35.215 }, 00:18:35.215 { 00:18:35.215 "name": "BaseBdev2", 00:18:35.215 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:35.215 "is_configured": true, 00:18:35.215 "data_offset": 2048, 00:18:35.215 "data_size": 63488 00:18:35.215 }, 00:18:35.215 { 00:18:35.215 "name": "BaseBdev3", 00:18:35.215 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:35.215 "is_configured": true, 00:18:35.215 "data_offset": 2048, 00:18:35.215 "data_size": 63488 00:18:35.215 }, 00:18:35.215 { 00:18:35.215 "name": "BaseBdev4", 00:18:35.215 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:35.215 "is_configured": true, 00:18:35.215 "data_offset": 2048, 00:18:35.215 "data_size": 63488 00:18:35.215 } 00:18:35.215 ] 00:18:35.215 }' 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.215 11:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 [2024-11-20 11:30:43.328001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 [2024-11-20 11:30:43.431539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.783 11:30:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.783 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.783 "name": "raid_bdev1", 00:18:35.783 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:35.783 "strip_size_kb": 0, 00:18:35.783 "state": "online", 00:18:35.783 "raid_level": "raid1", 00:18:35.783 
"superblock": true, 00:18:35.783 "num_base_bdevs": 4, 00:18:35.783 "num_base_bdevs_discovered": 3, 00:18:35.783 "num_base_bdevs_operational": 3, 00:18:35.783 "base_bdevs_list": [ 00:18:35.783 { 00:18:35.783 "name": null, 00:18:35.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.783 "is_configured": false, 00:18:35.783 "data_offset": 0, 00:18:35.783 "data_size": 63488 00:18:35.783 }, 00:18:35.783 { 00:18:35.783 "name": "BaseBdev2", 00:18:35.783 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:35.783 "is_configured": true, 00:18:35.784 "data_offset": 2048, 00:18:35.784 "data_size": 63488 00:18:35.784 }, 00:18:35.784 { 00:18:35.784 "name": "BaseBdev3", 00:18:35.784 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:35.784 "is_configured": true, 00:18:35.784 "data_offset": 2048, 00:18:35.784 "data_size": 63488 00:18:35.784 }, 00:18:35.784 { 00:18:35.784 "name": "BaseBdev4", 00:18:35.784 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:35.784 "is_configured": true, 00:18:35.784 "data_offset": 2048, 00:18:35.784 "data_size": 63488 00:18:35.784 } 00:18:35.784 ] 00:18:35.784 }' 00:18:35.784 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.784 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.784 [2024-11-20 11:30:43.563714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:35.784 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:35.784 Zero copy mechanism will not be used. 00:18:35.784 Running I/O for 60 seconds... 
00:18:36.351 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.351 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.351 11:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.351 [2024-11-20 11:30:43.979542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.351 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.351 11:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:36.351 [2024-11-20 11:30:44.075450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:36.351 [2024-11-20 11:30:44.078276] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.609 [2024-11-20 11:30:44.205312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:36.609 [2024-11-20 11:30:44.338496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:36.609 [2024-11-20 11:30:44.338958] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:36.868 163.00 IOPS, 489.00 MiB/s [2024-11-20T11:30:44.714Z] [2024-11-20 11:30:44.636187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:37.132 [2024-11-20 11:30:44.770672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:37.132 [2024-11-20 11:30:44.771537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:37.391 11:30:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.391 "name": "raid_bdev1", 00:18:37.391 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:37.391 "strip_size_kb": 0, 00:18:37.391 "state": "online", 00:18:37.391 "raid_level": "raid1", 00:18:37.391 "superblock": true, 00:18:37.391 "num_base_bdevs": 4, 00:18:37.391 "num_base_bdevs_discovered": 4, 00:18:37.391 "num_base_bdevs_operational": 4, 00:18:37.391 "process": { 00:18:37.391 "type": "rebuild", 00:18:37.391 "target": "spare", 00:18:37.391 "progress": { 00:18:37.391 "blocks": 12288, 00:18:37.391 "percent": 19 00:18:37.391 } 00:18:37.391 }, 00:18:37.391 "base_bdevs_list": [ 00:18:37.391 { 00:18:37.391 "name": "spare", 00:18:37.391 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:37.391 "is_configured": true, 00:18:37.391 "data_offset": 2048, 
00:18:37.391 "data_size": 63488 00:18:37.391 }, 00:18:37.391 { 00:18:37.391 "name": "BaseBdev2", 00:18:37.391 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:37.391 "is_configured": true, 00:18:37.391 "data_offset": 2048, 00:18:37.391 "data_size": 63488 00:18:37.391 }, 00:18:37.391 { 00:18:37.391 "name": "BaseBdev3", 00:18:37.391 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:37.391 "is_configured": true, 00:18:37.391 "data_offset": 2048, 00:18:37.391 "data_size": 63488 00:18:37.391 }, 00:18:37.391 { 00:18:37.391 "name": "BaseBdev4", 00:18:37.391 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:37.391 "is_configured": true, 00:18:37.391 "data_offset": 2048, 00:18:37.391 "data_size": 63488 00:18:37.391 } 00:18:37.391 ] 00:18:37.391 }' 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.391 [2024-11-20 11:30:45.101061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.391 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.391 [2024-11-20 11:30:45.190187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.651 [2024-11-20 11:30:45.243002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:18:37.651 [2024-11-20 11:30:45.373977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.651 [2024-11-20 11:30:45.388511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.651 [2024-11-20 11:30:45.388599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.651 [2024-11-20 11:30:45.388644] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.651 [2024-11-20 11:30:45.413827] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.651 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.911 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.911 "name": "raid_bdev1", 00:18:37.911 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:37.911 "strip_size_kb": 0, 00:18:37.911 "state": "online", 00:18:37.911 "raid_level": "raid1", 00:18:37.911 "superblock": true, 00:18:37.911 "num_base_bdevs": 4, 00:18:37.911 "num_base_bdevs_discovered": 3, 00:18:37.911 "num_base_bdevs_operational": 3, 00:18:37.911 "base_bdevs_list": [ 00:18:37.911 { 00:18:37.911 "name": null, 00:18:37.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.911 "is_configured": false, 00:18:37.911 "data_offset": 0, 00:18:37.911 "data_size": 63488 00:18:37.911 }, 00:18:37.911 { 00:18:37.911 "name": "BaseBdev2", 00:18:37.911 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:37.911 "is_configured": true, 00:18:37.911 "data_offset": 2048, 00:18:37.911 "data_size": 63488 00:18:37.911 }, 00:18:37.911 { 00:18:37.911 "name": "BaseBdev3", 00:18:37.911 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:37.911 "is_configured": true, 00:18:37.911 "data_offset": 2048, 00:18:37.911 "data_size": 63488 00:18:37.911 }, 00:18:37.911 { 00:18:37.911 "name": "BaseBdev4", 00:18:37.911 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:37.911 "is_configured": true, 00:18:37.911 "data_offset": 2048, 00:18:37.911 "data_size": 63488 00:18:37.911 } 00:18:37.911 ] 00:18:37.911 }' 00:18:37.911 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:18:37.911 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.170 128.50 IOPS, 385.50 MiB/s [2024-11-20T11:30:46.016Z] 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.170 11:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.430 "name": "raid_bdev1", 00:18:38.430 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:38.430 "strip_size_kb": 0, 00:18:38.430 "state": "online", 00:18:38.430 "raid_level": "raid1", 00:18:38.430 "superblock": true, 00:18:38.430 "num_base_bdevs": 4, 00:18:38.430 "num_base_bdevs_discovered": 3, 00:18:38.430 "num_base_bdevs_operational": 3, 00:18:38.430 "base_bdevs_list": [ 00:18:38.430 { 00:18:38.430 "name": null, 00:18:38.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.430 "is_configured": false, 00:18:38.430 "data_offset": 0, 00:18:38.430 "data_size": 63488 00:18:38.430 }, 
00:18:38.430 { 00:18:38.430 "name": "BaseBdev2", 00:18:38.430 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:38.430 "is_configured": true, 00:18:38.430 "data_offset": 2048, 00:18:38.430 "data_size": 63488 00:18:38.430 }, 00:18:38.430 { 00:18:38.430 "name": "BaseBdev3", 00:18:38.430 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:38.430 "is_configured": true, 00:18:38.430 "data_offset": 2048, 00:18:38.430 "data_size": 63488 00:18:38.430 }, 00:18:38.430 { 00:18:38.430 "name": "BaseBdev4", 00:18:38.430 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:38.430 "is_configured": true, 00:18:38.430 "data_offset": 2048, 00:18:38.430 "data_size": 63488 00:18:38.430 } 00:18:38.430 ] 00:18:38.430 }' 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.430 [2024-11-20 11:30:46.141181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.430 11:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:38.430 [2024-11-20 11:30:46.216973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:38.430 [2024-11-20 
11:30:46.219664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.689 [2024-11-20 11:30:46.370578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:38.689 [2024-11-20 11:30:46.371262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:38.948 133.67 IOPS, 401.00 MiB/s [2024-11-20T11:30:46.794Z] [2024-11-20 11:30:46.595664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:38.948 [2024-11-20 11:30:46.596086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:39.208 [2024-11-20 11:30:46.823388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:39.208 [2024-11-20 11:30:46.936264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:39.208 [2024-11-20 11:30:46.936700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.467 
11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.467 "name": "raid_bdev1", 00:18:39.467 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:39.467 "strip_size_kb": 0, 00:18:39.467 "state": "online", 00:18:39.467 "raid_level": "raid1", 00:18:39.467 "superblock": true, 00:18:39.467 "num_base_bdevs": 4, 00:18:39.467 "num_base_bdevs_discovered": 4, 00:18:39.467 "num_base_bdevs_operational": 4, 00:18:39.467 "process": { 00:18:39.467 "type": "rebuild", 00:18:39.467 "target": "spare", 00:18:39.467 "progress": { 00:18:39.467 "blocks": 14336, 00:18:39.467 "percent": 22 00:18:39.467 } 00:18:39.467 }, 00:18:39.467 "base_bdevs_list": [ 00:18:39.467 { 00:18:39.467 "name": "spare", 00:18:39.467 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:39.467 "is_configured": true, 00:18:39.467 "data_offset": 2048, 00:18:39.467 "data_size": 63488 00:18:39.467 }, 00:18:39.467 { 00:18:39.467 "name": "BaseBdev2", 00:18:39.467 "uuid": "fc434b56-bafc-5c22-831f-c4dc47f83f20", 00:18:39.467 "is_configured": true, 00:18:39.467 "data_offset": 2048, 00:18:39.467 "data_size": 63488 00:18:39.467 }, 00:18:39.467 { 00:18:39.467 "name": "BaseBdev3", 00:18:39.467 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:39.467 "is_configured": true, 00:18:39.467 "data_offset": 2048, 00:18:39.467 "data_size": 63488 00:18:39.467 }, 00:18:39.467 { 00:18:39.467 "name": "BaseBdev4", 00:18:39.467 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:39.467 "is_configured": true, 00:18:39.467 "data_offset": 2048, 
00:18:39.467 "data_size": 63488 00:18:39.467 } 00:18:39.467 ] 00:18:39.467 }' 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.467 [2024-11-20 11:30:47.267717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.467 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:39.726 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.726 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.726 [2024-11-20 11:30:47.332059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:39.985 117.75 IOPS, 353.25 MiB/s [2024-11-20T11:30:47.831Z] [2024-11-20 11:30:47.578955] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 
00:18:39.985 [2024-11-20 11:30:47.579017] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.985 "name": "raid_bdev1", 00:18:39.985 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:39.985 "strip_size_kb": 0, 00:18:39.985 "state": "online", 00:18:39.985 "raid_level": "raid1", 00:18:39.985 "superblock": true, 00:18:39.985 "num_base_bdevs": 4, 00:18:39.985 
"num_base_bdevs_discovered": 3, 00:18:39.985 "num_base_bdevs_operational": 3, 00:18:39.985 "process": { 00:18:39.985 "type": "rebuild", 00:18:39.985 "target": "spare", 00:18:39.985 "progress": { 00:18:39.985 "blocks": 18432, 00:18:39.985 "percent": 29 00:18:39.985 } 00:18:39.985 }, 00:18:39.985 "base_bdevs_list": [ 00:18:39.985 { 00:18:39.985 "name": "spare", 00:18:39.985 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:39.985 "is_configured": true, 00:18:39.985 "data_offset": 2048, 00:18:39.985 "data_size": 63488 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "name": null, 00:18:39.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.985 "is_configured": false, 00:18:39.985 "data_offset": 0, 00:18:39.985 "data_size": 63488 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "name": "BaseBdev3", 00:18:39.985 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:39.985 "is_configured": true, 00:18:39.985 "data_offset": 2048, 00:18:39.985 "data_size": 63488 00:18:39.985 }, 00:18:39.985 { 00:18:39.985 "name": "BaseBdev4", 00:18:39.985 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:39.985 "is_configured": true, 00:18:39.985 "data_offset": 2048, 00:18:39.985 "data_size": 63488 00:18:39.985 } 00:18:39.985 ] 00:18:39.985 }' 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.985 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.986 "name": "raid_bdev1", 00:18:39.986 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:39.986 "strip_size_kb": 0, 00:18:39.986 "state": "online", 00:18:39.986 "raid_level": "raid1", 00:18:39.986 "superblock": true, 00:18:39.986 "num_base_bdevs": 4, 00:18:39.986 "num_base_bdevs_discovered": 3, 00:18:39.986 "num_base_bdevs_operational": 3, 00:18:39.986 "process": { 00:18:39.986 "type": "rebuild", 00:18:39.986 "target": "spare", 00:18:39.986 "progress": { 00:18:39.986 "blocks": 20480, 00:18:39.986 "percent": 32 00:18:39.986 } 00:18:39.986 }, 00:18:39.986 "base_bdevs_list": [ 00:18:39.986 { 00:18:39.986 "name": "spare", 00:18:39.986 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:39.986 "is_configured": true, 00:18:39.986 "data_offset": 2048, 00:18:39.986 "data_size": 63488 
00:18:39.986 }, 00:18:39.986 { 00:18:39.986 "name": null, 00:18:39.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.986 "is_configured": false, 00:18:39.986 "data_offset": 0, 00:18:39.986 "data_size": 63488 00:18:39.986 }, 00:18:39.986 { 00:18:39.986 "name": "BaseBdev3", 00:18:39.986 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:39.986 "is_configured": true, 00:18:39.986 "data_offset": 2048, 00:18:39.986 "data_size": 63488 00:18:39.986 }, 00:18:39.986 { 00:18:39.986 "name": "BaseBdev4", 00:18:39.986 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:39.986 "is_configured": true, 00:18:39.986 "data_offset": 2048, 00:18:39.986 "data_size": 63488 00:18:39.986 } 00:18:39.986 ] 00:18:39.986 }' 00:18:39.986 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.245 [2024-11-20 11:30:47.845415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:40.245 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.245 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.245 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.245 11:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.504 [2024-11-20 11:30:48.109748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:40.504 [2024-11-20 11:30:48.331211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:41.334 103.20 IOPS, 309.60 MiB/s [2024-11-20T11:30:49.180Z] 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.334 [2024-11-20 11:30:48.923412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:41.334 [2024-11-20 11:30:48.924095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.334 "name": "raid_bdev1", 00:18:41.334 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:41.334 "strip_size_kb": 0, 00:18:41.334 "state": "online", 00:18:41.334 "raid_level": "raid1", 00:18:41.334 "superblock": true, 00:18:41.334 "num_base_bdevs": 4, 00:18:41.334 "num_base_bdevs_discovered": 3, 00:18:41.334 "num_base_bdevs_operational": 3, 00:18:41.334 "process": { 00:18:41.334 "type": "rebuild", 00:18:41.334 "target": "spare", 00:18:41.334 "progress": { 
00:18:41.334 "blocks": 36864, 00:18:41.334 "percent": 58 00:18:41.334 } 00:18:41.334 }, 00:18:41.334 "base_bdevs_list": [ 00:18:41.334 { 00:18:41.334 "name": "spare", 00:18:41.334 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:41.334 "is_configured": true, 00:18:41.334 "data_offset": 2048, 00:18:41.334 "data_size": 63488 00:18:41.334 }, 00:18:41.334 { 00:18:41.334 "name": null, 00:18:41.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.334 "is_configured": false, 00:18:41.334 "data_offset": 0, 00:18:41.334 "data_size": 63488 00:18:41.334 }, 00:18:41.334 { 00:18:41.334 "name": "BaseBdev3", 00:18:41.334 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:41.334 "is_configured": true, 00:18:41.334 "data_offset": 2048, 00:18:41.334 "data_size": 63488 00:18:41.334 }, 00:18:41.334 { 00:18:41.334 "name": "BaseBdev4", 00:18:41.334 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:41.334 "is_configured": true, 00:18:41.334 "data_offset": 2048, 00:18:41.334 "data_size": 63488 00:18:41.334 } 00:18:41.334 ] 00:18:41.334 }' 00:18:41.334 11:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.334 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.334 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.334 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.334 11:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.334 [2024-11-20 11:30:49.146832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:41.901 [2024-11-20 11:30:49.500053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:41.901 91.33 IOPS, 274.00 MiB/s 
[2024-11-20T11:30:49.747Z] [2024-11-20 11:30:49.733121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.468 "name": "raid_bdev1", 00:18:42.468 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:42.468 "strip_size_kb": 0, 00:18:42.468 "state": "online", 00:18:42.468 "raid_level": "raid1", 00:18:42.468 "superblock": true, 00:18:42.468 "num_base_bdevs": 4, 00:18:42.468 "num_base_bdevs_discovered": 3, 00:18:42.468 "num_base_bdevs_operational": 3, 00:18:42.468 "process": { 00:18:42.468 "type": "rebuild", 00:18:42.468 "target": "spare", 00:18:42.468 
"progress": { 00:18:42.468 "blocks": 49152, 00:18:42.468 "percent": 77 00:18:42.468 } 00:18:42.468 }, 00:18:42.468 "base_bdevs_list": [ 00:18:42.468 { 00:18:42.468 "name": "spare", 00:18:42.468 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:42.468 "is_configured": true, 00:18:42.468 "data_offset": 2048, 00:18:42.468 "data_size": 63488 00:18:42.468 }, 00:18:42.468 { 00:18:42.468 "name": null, 00:18:42.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.468 "is_configured": false, 00:18:42.468 "data_offset": 0, 00:18:42.468 "data_size": 63488 00:18:42.468 }, 00:18:42.468 { 00:18:42.468 "name": "BaseBdev3", 00:18:42.468 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:42.468 "is_configured": true, 00:18:42.468 "data_offset": 2048, 00:18:42.468 "data_size": 63488 00:18:42.468 }, 00:18:42.468 { 00:18:42.468 "name": "BaseBdev4", 00:18:42.468 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:42.468 "is_configured": true, 00:18:42.468 "data_offset": 2048, 00:18:42.468 "data_size": 63488 00:18:42.468 } 00:18:42.468 ] 00:18:42.468 }' 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.468 11:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.727 [2024-11-20 11:30:50.435380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:42.986 83.00 IOPS, 249.00 MiB/s [2024-11-20T11:30:50.832Z] [2024-11-20 11:30:50.780893] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:43.244 [2024-11-20 
11:30:50.880915] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:43.244 [2024-11-20 11:30:50.892764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.503 "name": "raid_bdev1", 00:18:43.503 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:43.503 "strip_size_kb": 0, 00:18:43.503 "state": "online", 00:18:43.503 "raid_level": "raid1", 00:18:43.503 "superblock": true, 00:18:43.503 "num_base_bdevs": 4, 00:18:43.503 "num_base_bdevs_discovered": 3, 00:18:43.503 "num_base_bdevs_operational": 3, 00:18:43.503 "base_bdevs_list": [ 00:18:43.503 { 
00:18:43.503 "name": "spare", 00:18:43.503 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:43.503 "is_configured": true, 00:18:43.503 "data_offset": 2048, 00:18:43.503 "data_size": 63488 00:18:43.503 }, 00:18:43.503 { 00:18:43.503 "name": null, 00:18:43.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.503 "is_configured": false, 00:18:43.503 "data_offset": 0, 00:18:43.503 "data_size": 63488 00:18:43.503 }, 00:18:43.503 { 00:18:43.503 "name": "BaseBdev3", 00:18:43.503 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:43.503 "is_configured": true, 00:18:43.503 "data_offset": 2048, 00:18:43.503 "data_size": 63488 00:18:43.503 }, 00:18:43.503 { 00:18:43.503 "name": "BaseBdev4", 00:18:43.503 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:43.503 "is_configured": true, 00:18:43.503 "data_offset": 2048, 00:18:43.503 "data_size": 63488 00:18:43.503 } 00:18:43.503 ] 00:18:43.503 }' 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:43.503 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.762 "name": "raid_bdev1", 00:18:43.762 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:43.762 "strip_size_kb": 0, 00:18:43.762 "state": "online", 00:18:43.762 "raid_level": "raid1", 00:18:43.762 "superblock": true, 00:18:43.762 "num_base_bdevs": 4, 00:18:43.762 "num_base_bdevs_discovered": 3, 00:18:43.762 "num_base_bdevs_operational": 3, 00:18:43.762 "base_bdevs_list": [ 00:18:43.762 { 00:18:43.762 "name": "spare", 00:18:43.762 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": null, 00:18:43.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.762 "is_configured": false, 00:18:43.762 "data_offset": 0, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "BaseBdev3", 00:18:43.762 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "BaseBdev4", 00:18:43.762 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 
00:18:43.762 } 00:18:43.762 ] 00:18:43.762 }' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.762 11:30:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.762 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.762 76.50 IOPS, 229.50 MiB/s [2024-11-20T11:30:51.608Z] 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.762 "name": "raid_bdev1", 00:18:43.762 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:43.762 "strip_size_kb": 0, 00:18:43.762 "state": "online", 00:18:43.762 "raid_level": "raid1", 00:18:43.762 "superblock": true, 00:18:43.762 "num_base_bdevs": 4, 00:18:43.762 "num_base_bdevs_discovered": 3, 00:18:43.762 "num_base_bdevs_operational": 3, 00:18:43.762 "base_bdevs_list": [ 00:18:43.762 { 00:18:43.762 "name": "spare", 00:18:43.762 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:43.762 "is_configured": true, 00:18:43.762 "data_offset": 2048, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": null, 00:18:43.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.762 "is_configured": false, 00:18:43.762 "data_offset": 0, 00:18:43.762 "data_size": 63488 00:18:43.762 }, 00:18:43.762 { 00:18:43.762 "name": "BaseBdev3", 00:18:43.762 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:43.762 "is_configured": true, 00:18:43.763 "data_offset": 2048, 00:18:43.763 "data_size": 63488 00:18:43.763 }, 00:18:43.763 { 00:18:43.763 "name": "BaseBdev4", 00:18:43.763 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:43.763 "is_configured": true, 00:18:43.763 "data_offset": 2048, 00:18:43.763 "data_size": 63488 00:18:43.763 } 00:18:43.763 ] 00:18:43.763 }' 00:18:43.763 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.763 11:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.330 [2024-11-20 11:30:52.043023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.330 [2024-11-20 11:30:52.043063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.330 00:18:44.330 Latency(us) 00:18:44.330 [2024-11-20T11:30:52.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.330 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:44.330 raid_bdev1 : 8.56 73.71 221.13 0.00 0.00 18700.21 309.06 122969.37 00:18:44.330 [2024-11-20T11:30:52.176Z] =================================================================================================================== 00:18:44.330 [2024-11-20T11:30:52.176Z] Total : 73.71 221.13 0.00 0.00 18700.21 309.06 122969.37 00:18:44.330 { 00:18:44.330 "results": [ 00:18:44.330 { 00:18:44.330 "job": "raid_bdev1", 00:18:44.330 "core_mask": "0x1", 00:18:44.330 "workload": "randrw", 00:18:44.330 "percentage": 50, 00:18:44.330 "status": "finished", 00:18:44.330 "queue_depth": 2, 00:18:44.330 "io_size": 3145728, 00:18:44.330 "runtime": 8.560505, 00:18:44.330 "iops": 73.71060468979341, 00:18:44.330 "mibps": 221.13181406938025, 00:18:44.330 "io_failed": 0, 00:18:44.330 "io_timeout": 0, 00:18:44.330 "avg_latency_us": 18700.213940354417, 00:18:44.330 "min_latency_us": 309.0618181818182, 00:18:44.330 "max_latency_us": 122969.36727272728 00:18:44.330 } 00:18:44.330 ], 00:18:44.330 "core_count": 1 00:18:44.330 } 00:18:44.330 [2024-11-20 11:30:52.147119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.330 [2024-11-20 11:30:52.147198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:18:44.330 [2024-11-20 11:30:52.147338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.330 [2024-11-20 11:30:52.147357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.330 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.588 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:44.847 /dev/nbd0 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.847 1+0 records in 00:18:44.847 1+0 records out 00:18:44.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636042 s, 6.4 MB/s 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.847 
11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:44.847 11:30:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.847 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:45.107 /dev/nbd1 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.107 1+0 records in 00:18:45.107 1+0 records out 00:18:45.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000969914 s, 4.2 MB/s 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.107 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.366 11:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.625 
11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.625 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:45.884 /dev/nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.884 1+0 records in 00:18:45.884 1+0 records out 00:18:45.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003998 s, 10.2 MB/s 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.884 11:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.451 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.451 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.451 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.451 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.451 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:46.452 11:30:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.452 11:30:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.452 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.452 [2024-11-20 11:30:54.294935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.452 [2024-11-20 11:30:54.295034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.452 [2024-11-20 11:30:54.295068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:46.452 [2024-11-20 11:30:54.295095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.710 [2024-11-20 11:30:54.298151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.711 [2024-11-20 11:30:54.298200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.711 [2024-11-20 11:30:54.298318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.711 [2024-11-20 11:30:54.298394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.711 [2024-11-20 11:30:54.298575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:46.711 [2024-11-20 11:30:54.298740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:46.711 spare 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.711 [2024-11-20 11:30:54.398886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:46.711 [2024-11-20 11:30:54.398938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:46.711 [2024-11-20 11:30:54.399347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:46.711 [2024-11-20 11:30:54.399589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:46.711 [2024-11-20 11:30:54.399611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:46.711 [2024-11-20 11:30:54.400236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.711 "name": "raid_bdev1", 00:18:46.711 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:46.711 "strip_size_kb": 0, 00:18:46.711 "state": "online", 00:18:46.711 "raid_level": "raid1", 00:18:46.711 "superblock": true, 00:18:46.711 "num_base_bdevs": 4, 00:18:46.711 "num_base_bdevs_discovered": 3, 00:18:46.711 "num_base_bdevs_operational": 3, 00:18:46.711 "base_bdevs_list": [ 00:18:46.711 { 00:18:46.711 "name": "spare", 00:18:46.711 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:46.711 "is_configured": true, 00:18:46.711 "data_offset": 2048, 00:18:46.711 "data_size": 63488 00:18:46.711 }, 00:18:46.711 { 00:18:46.711 "name": null, 00:18:46.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.711 "is_configured": false, 00:18:46.711 "data_offset": 2048, 00:18:46.711 "data_size": 63488 00:18:46.711 }, 00:18:46.711 { 00:18:46.711 "name": "BaseBdev3", 00:18:46.711 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:46.711 "is_configured": true, 00:18:46.711 "data_offset": 2048, 00:18:46.711 "data_size": 63488 00:18:46.711 }, 00:18:46.711 { 00:18:46.711 "name": "BaseBdev4", 00:18:46.711 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:46.711 
"is_configured": true, 00:18:46.711 "data_offset": 2048, 00:18:46.711 "data_size": 63488 00:18:46.711 } 00:18:46.711 ] 00:18:46.711 }' 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.711 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.279 "name": "raid_bdev1", 00:18:47.279 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:47.279 "strip_size_kb": 0, 00:18:47.279 "state": "online", 00:18:47.279 "raid_level": "raid1", 00:18:47.279 "superblock": true, 00:18:47.279 "num_base_bdevs": 4, 00:18:47.279 "num_base_bdevs_discovered": 3, 00:18:47.279 "num_base_bdevs_operational": 3, 00:18:47.279 "base_bdevs_list": [ 00:18:47.279 { 00:18:47.279 "name": 
"spare", 00:18:47.279 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:47.279 "is_configured": true, 00:18:47.279 "data_offset": 2048, 00:18:47.279 "data_size": 63488 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "name": null, 00:18:47.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.279 "is_configured": false, 00:18:47.279 "data_offset": 2048, 00:18:47.279 "data_size": 63488 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "name": "BaseBdev3", 00:18:47.279 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:47.279 "is_configured": true, 00:18:47.279 "data_offset": 2048, 00:18:47.279 "data_size": 63488 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "name": "BaseBdev4", 00:18:47.279 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:47.279 "is_configured": true, 00:18:47.279 "data_offset": 2048, 00:18:47.279 "data_size": 63488 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 }' 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.279 11:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 [2024-11-20 11:30:55.100530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.279 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.538 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.538 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.538 "name": "raid_bdev1", 00:18:47.538 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:47.538 "strip_size_kb": 0, 00:18:47.538 "state": "online", 00:18:47.538 "raid_level": "raid1", 00:18:47.538 "superblock": true, 00:18:47.538 "num_base_bdevs": 4, 00:18:47.538 "num_base_bdevs_discovered": 2, 00:18:47.538 "num_base_bdevs_operational": 2, 00:18:47.538 "base_bdevs_list": [ 00:18:47.538 { 00:18:47.538 "name": null, 00:18:47.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.538 "is_configured": false, 00:18:47.538 "data_offset": 0, 00:18:47.538 "data_size": 63488 00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": null, 00:18:47.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.538 "is_configured": false, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": "BaseBdev3", 00:18:47.538 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:47.538 "is_configured": true, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 00:18:47.538 }, 00:18:47.538 { 00:18:47.538 "name": "BaseBdev4", 00:18:47.538 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:47.538 "is_configured": true, 00:18:47.538 "data_offset": 2048, 00:18:47.538 "data_size": 63488 00:18:47.538 } 00:18:47.538 ] 00:18:47.538 }' 00:18:47.538 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.538 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.798 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.798 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.798 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.798 [2024-11-20 11:30:55.624852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.798 [2024-11-20 11:30:55.625187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:47.798 [2024-11-20 11:30:55.625232] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:47.798 [2024-11-20 11:30:55.625321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.140 [2024-11-20 11:30:55.643842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:48.140 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.140 11:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:48.140 [2024-11-20 11:30:55.646342] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.078 11:30:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.078 "name": "raid_bdev1", 00:18:49.078 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:49.078 "strip_size_kb": 0, 00:18:49.078 "state": "online", 00:18:49.078 "raid_level": "raid1", 00:18:49.078 "superblock": true, 00:18:49.078 "num_base_bdevs": 4, 00:18:49.078 "num_base_bdevs_discovered": 3, 00:18:49.078 "num_base_bdevs_operational": 3, 00:18:49.078 "process": { 00:18:49.078 "type": "rebuild", 00:18:49.078 "target": "spare", 00:18:49.078 "progress": { 00:18:49.078 "blocks": 20480, 00:18:49.078 "percent": 32 00:18:49.078 } 00:18:49.078 }, 00:18:49.078 "base_bdevs_list": [ 00:18:49.078 { 00:18:49.078 "name": "spare", 00:18:49.078 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:49.078 "is_configured": true, 00:18:49.078 "data_offset": 2048, 00:18:49.078 "data_size": 63488 00:18:49.078 }, 00:18:49.078 { 00:18:49.078 "name": null, 00:18:49.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.078 "is_configured": false, 00:18:49.078 "data_offset": 2048, 00:18:49.078 "data_size": 63488 00:18:49.078 }, 00:18:49.078 { 00:18:49.078 "name": "BaseBdev3", 00:18:49.078 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:49.078 "is_configured": true, 00:18:49.078 "data_offset": 2048, 00:18:49.078 "data_size": 63488 00:18:49.078 }, 00:18:49.078 { 00:18:49.078 "name": "BaseBdev4", 00:18:49.078 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:49.078 "is_configured": true, 00:18:49.078 "data_offset": 2048, 00:18:49.078 
"data_size": 63488 00:18:49.078 } 00:18:49.078 ] 00:18:49.078 }' 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.078 [2024-11-20 11:30:56.815735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.078 [2024-11-20 11:30:56.855629] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.078 [2024-11-20 11:30:56.855730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.078 [2024-11-20 11:30:56.855765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.078 [2024-11-20 11:30:56.855779] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.078 11:30:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.078 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.337 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.337 "name": "raid_bdev1", 00:18:49.337 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:49.337 "strip_size_kb": 0, 00:18:49.337 "state": "online", 00:18:49.337 "raid_level": "raid1", 00:18:49.337 "superblock": true, 00:18:49.337 "num_base_bdevs": 4, 00:18:49.337 "num_base_bdevs_discovered": 2, 00:18:49.337 "num_base_bdevs_operational": 2, 00:18:49.337 "base_bdevs_list": [ 00:18:49.337 { 00:18:49.337 "name": null, 00:18:49.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.337 "is_configured": false, 00:18:49.337 "data_offset": 0, 00:18:49.337 "data_size": 
63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": null, 00:18:49.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.337 "is_configured": false, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": "BaseBdev3", 00:18:49.337 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 }, 00:18:49.337 { 00:18:49.337 "name": "BaseBdev4", 00:18:49.337 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:49.337 "is_configured": true, 00:18:49.337 "data_offset": 2048, 00:18:49.337 "data_size": 63488 00:18:49.337 } 00:18:49.337 ] 00:18:49.337 }' 00:18:49.337 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.337 11:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.596 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.596 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.596 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.596 [2024-11-20 11:30:57.394811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.596 [2024-11-20 11:30:57.394890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.596 [2024-11-20 11:30:57.394931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:49.596 [2024-11-20 11:30:57.394947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.596 [2024-11-20 11:30:57.395559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.596 [2024-11-20 11:30:57.395593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:18:49.596 [2024-11-20 11:30:57.395735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.597 [2024-11-20 11:30:57.395757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:49.597 [2024-11-20 11:30:57.395777] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:49.597 [2024-11-20 11:30:57.395806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.597 [2024-11-20 11:30:57.410381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:49.597 spare 00:18:49.597 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.597 11:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:49.597 [2024-11-20 11:30:57.412911] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.971 "name": "raid_bdev1", 00:18:50.971 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:50.971 "strip_size_kb": 0, 00:18:50.971 "state": "online", 00:18:50.971 "raid_level": "raid1", 00:18:50.971 "superblock": true, 00:18:50.971 "num_base_bdevs": 4, 00:18:50.971 "num_base_bdevs_discovered": 3, 00:18:50.971 "num_base_bdevs_operational": 3, 00:18:50.971 "process": { 00:18:50.971 "type": "rebuild", 00:18:50.971 "target": "spare", 00:18:50.971 "progress": { 00:18:50.971 "blocks": 20480, 00:18:50.971 "percent": 32 00:18:50.971 } 00:18:50.971 }, 00:18:50.971 "base_bdevs_list": [ 00:18:50.971 { 00:18:50.971 "name": "spare", 00:18:50.971 "uuid": "d46ff1d1-18fd-5a00-80df-1421c9586ffe", 00:18:50.971 "is_configured": true, 00:18:50.971 "data_offset": 2048, 00:18:50.971 "data_size": 63488 00:18:50.971 }, 00:18:50.971 { 00:18:50.971 "name": null, 00:18:50.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.971 "is_configured": false, 00:18:50.971 "data_offset": 2048, 00:18:50.971 "data_size": 63488 00:18:50.971 }, 00:18:50.971 { 00:18:50.971 "name": "BaseBdev3", 00:18:50.971 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:50.971 "is_configured": true, 00:18:50.971 "data_offset": 2048, 00:18:50.971 "data_size": 63488 00:18:50.971 }, 00:18:50.971 { 00:18:50.971 "name": "BaseBdev4", 00:18:50.971 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:50.971 "is_configured": true, 00:18:50.971 "data_offset": 2048, 00:18:50.971 "data_size": 63488 00:18:50.971 } 00:18:50.971 ] 00:18:50.971 }' 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.971 11:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.971 [2024-11-20 11:30:58.586273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.971 [2024-11-20 11:30:58.622180] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.971 [2024-11-20 11:30:58.622600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.971 [2024-11-20 11:30:58.622824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.971 [2024-11-20 11:30:58.622884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.971 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.972 11:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.972 "name": "raid_bdev1", 00:18:50.972 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:50.972 "strip_size_kb": 0, 00:18:50.972 "state": "online", 00:18:50.972 "raid_level": "raid1", 00:18:50.972 "superblock": true, 00:18:50.972 "num_base_bdevs": 4, 00:18:50.972 "num_base_bdevs_discovered": 2, 00:18:50.972 "num_base_bdevs_operational": 2, 00:18:50.972 "base_bdevs_list": [ 00:18:50.972 { 00:18:50.972 "name": null, 00:18:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.972 "is_configured": false, 00:18:50.972 "data_offset": 0, 00:18:50.972 "data_size": 63488 00:18:50.972 }, 00:18:50.972 { 00:18:50.972 "name": null, 00:18:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.972 "is_configured": false, 00:18:50.972 "data_offset": 2048, 00:18:50.972 
"data_size": 63488 00:18:50.972 }, 00:18:50.972 { 00:18:50.972 "name": "BaseBdev3", 00:18:50.972 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:50.972 "is_configured": true, 00:18:50.972 "data_offset": 2048, 00:18:50.972 "data_size": 63488 00:18:50.972 }, 00:18:50.972 { 00:18:50.972 "name": "BaseBdev4", 00:18:50.972 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:50.972 "is_configured": true, 00:18:50.972 "data_offset": 2048, 00:18:50.972 "data_size": 63488 00:18:50.972 } 00:18:50.972 ] 00:18:50.972 }' 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.972 11:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.537 "name": "raid_bdev1", 
00:18:51.537 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:51.537 "strip_size_kb": 0, 00:18:51.537 "state": "online", 00:18:51.537 "raid_level": "raid1", 00:18:51.537 "superblock": true, 00:18:51.537 "num_base_bdevs": 4, 00:18:51.537 "num_base_bdevs_discovered": 2, 00:18:51.537 "num_base_bdevs_operational": 2, 00:18:51.537 "base_bdevs_list": [ 00:18:51.537 { 00:18:51.537 "name": null, 00:18:51.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.537 "is_configured": false, 00:18:51.537 "data_offset": 0, 00:18:51.537 "data_size": 63488 00:18:51.537 }, 00:18:51.537 { 00:18:51.537 "name": null, 00:18:51.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.537 "is_configured": false, 00:18:51.537 "data_offset": 2048, 00:18:51.537 "data_size": 63488 00:18:51.537 }, 00:18:51.537 { 00:18:51.537 "name": "BaseBdev3", 00:18:51.537 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:51.537 "is_configured": true, 00:18:51.537 "data_offset": 2048, 00:18:51.537 "data_size": 63488 00:18:51.537 }, 00:18:51.537 { 00:18:51.537 "name": "BaseBdev4", 00:18:51.537 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:51.537 "is_configured": true, 00:18:51.537 "data_offset": 2048, 00:18:51.537 "data_size": 63488 00:18:51.537 } 00:18:51.537 ] 00:18:51.537 }' 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.537 11:30:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.537 [2024-11-20 11:30:59.302672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.537 [2024-11-20 11:30:59.302756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.537 [2024-11-20 11:30:59.302797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:51.537 [2024-11-20 11:30:59.302816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.537 [2024-11-20 11:30:59.303447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.537 [2024-11-20 11:30:59.303497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.537 [2024-11-20 11:30:59.303645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:51.537 [2024-11-20 11:30:59.303680] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:51.537 [2024-11-20 11:30:59.303691] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.537 [2024-11-20 11:30:59.303709] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:51.537 BaseBdev1 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:51.537 11:30:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.540 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.821 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.821 "name": "raid_bdev1", 00:18:52.821 "uuid": 
"bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:52.821 "strip_size_kb": 0, 00:18:52.821 "state": "online", 00:18:52.821 "raid_level": "raid1", 00:18:52.821 "superblock": true, 00:18:52.821 "num_base_bdevs": 4, 00:18:52.821 "num_base_bdevs_discovered": 2, 00:18:52.821 "num_base_bdevs_operational": 2, 00:18:52.821 "base_bdevs_list": [ 00:18:52.821 { 00:18:52.821 "name": null, 00:18:52.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.821 "is_configured": false, 00:18:52.821 "data_offset": 0, 00:18:52.821 "data_size": 63488 00:18:52.821 }, 00:18:52.821 { 00:18:52.821 "name": null, 00:18:52.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.821 "is_configured": false, 00:18:52.821 "data_offset": 2048, 00:18:52.821 "data_size": 63488 00:18:52.821 }, 00:18:52.821 { 00:18:52.821 "name": "BaseBdev3", 00:18:52.821 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:52.821 "is_configured": true, 00:18:52.821 "data_offset": 2048, 00:18:52.821 "data_size": 63488 00:18:52.821 }, 00:18:52.821 { 00:18:52.821 "name": "BaseBdev4", 00:18:52.821 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:52.821 "is_configured": true, 00:18:52.821 "data_offset": 2048, 00:18:52.821 "data_size": 63488 00:18:52.821 } 00:18:52.821 ] 00:18:52.821 }' 00:18:52.821 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.821 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.080 "name": "raid_bdev1", 00:18:53.080 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:53.080 "strip_size_kb": 0, 00:18:53.080 "state": "online", 00:18:53.080 "raid_level": "raid1", 00:18:53.080 "superblock": true, 00:18:53.080 "num_base_bdevs": 4, 00:18:53.080 "num_base_bdevs_discovered": 2, 00:18:53.080 "num_base_bdevs_operational": 2, 00:18:53.080 "base_bdevs_list": [ 00:18:53.080 { 00:18:53.080 "name": null, 00:18:53.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.080 "is_configured": false, 00:18:53.080 "data_offset": 0, 00:18:53.080 "data_size": 63488 00:18:53.080 }, 00:18:53.080 { 00:18:53.080 "name": null, 00:18:53.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.080 "is_configured": false, 00:18:53.080 "data_offset": 2048, 00:18:53.080 "data_size": 63488 00:18:53.080 }, 00:18:53.080 { 00:18:53.080 "name": "BaseBdev3", 00:18:53.080 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:53.080 "is_configured": true, 00:18:53.080 "data_offset": 2048, 00:18:53.080 "data_size": 63488 00:18:53.080 }, 00:18:53.080 { 00:18:53.080 "name": "BaseBdev4", 00:18:53.080 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:53.080 "is_configured": true, 00:18:53.080 "data_offset": 2048, 00:18:53.080 "data_size": 63488 00:18:53.080 
} 00:18:53.080 ] 00:18:53.080 }' 00:18:53.080 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.341 [2024-11-20 11:31:00.992710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.341 [2024-11-20 11:31:00.993095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:18:53.341 [2024-11-20 11:31:00.993151] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:53.341 request: 00:18:53.341 { 00:18:53.341 "base_bdev": "BaseBdev1", 00:18:53.341 "raid_bdev": "raid_bdev1", 00:18:53.341 "method": "bdev_raid_add_base_bdev", 00:18:53.341 "req_id": 1 00:18:53.341 } 00:18:53.341 Got JSON-RPC error response 00:18:53.341 response: 00:18:53.341 { 00:18:53.341 "code": -22, 00:18:53.341 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:53.341 } 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:53.341 11:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.341 11:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.341 11:31:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.341 11:31:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.277 "name": "raid_bdev1", 00:18:54.277 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:54.277 "strip_size_kb": 0, 00:18:54.277 "state": "online", 00:18:54.277 "raid_level": "raid1", 00:18:54.277 "superblock": true, 00:18:54.277 "num_base_bdevs": 4, 00:18:54.277 "num_base_bdevs_discovered": 2, 00:18:54.277 "num_base_bdevs_operational": 2, 00:18:54.277 "base_bdevs_list": [ 00:18:54.277 { 00:18:54.277 "name": null, 00:18:54.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.277 "is_configured": false, 00:18:54.277 "data_offset": 0, 00:18:54.277 "data_size": 63488 00:18:54.277 }, 00:18:54.277 { 00:18:54.277 "name": null, 00:18:54.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.277 "is_configured": false, 00:18:54.277 "data_offset": 2048, 00:18:54.277 "data_size": 63488 00:18:54.277 }, 00:18:54.277 { 00:18:54.277 "name": "BaseBdev3", 00:18:54.277 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:54.277 "is_configured": true, 00:18:54.277 
"data_offset": 2048, 00:18:54.277 "data_size": 63488 00:18:54.277 }, 00:18:54.277 { 00:18:54.277 "name": "BaseBdev4", 00:18:54.277 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:54.277 "is_configured": true, 00:18:54.277 "data_offset": 2048, 00:18:54.277 "data_size": 63488 00:18:54.277 } 00:18:54.277 ] 00:18:54.277 }' 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.277 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.843 "name": "raid_bdev1", 00:18:54.843 "uuid": "bb2ba24e-75a3-42c8-a6ce-bae707cecbe8", 00:18:54.843 "strip_size_kb": 0, 00:18:54.843 "state": "online", 00:18:54.843 "raid_level": "raid1", 00:18:54.843 "superblock": true, 
00:18:54.843 "num_base_bdevs": 4, 00:18:54.843 "num_base_bdevs_discovered": 2, 00:18:54.843 "num_base_bdevs_operational": 2, 00:18:54.843 "base_bdevs_list": [ 00:18:54.843 { 00:18:54.843 "name": null, 00:18:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.843 "is_configured": false, 00:18:54.843 "data_offset": 0, 00:18:54.843 "data_size": 63488 00:18:54.843 }, 00:18:54.843 { 00:18:54.843 "name": null, 00:18:54.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.843 "is_configured": false, 00:18:54.843 "data_offset": 2048, 00:18:54.843 "data_size": 63488 00:18:54.843 }, 00:18:54.843 { 00:18:54.843 "name": "BaseBdev3", 00:18:54.843 "uuid": "43f395e4-0e9a-5d87-b891-0e0b838956ea", 00:18:54.843 "is_configured": true, 00:18:54.843 "data_offset": 2048, 00:18:54.843 "data_size": 63488 00:18:54.843 }, 00:18:54.843 { 00:18:54.843 "name": "BaseBdev4", 00:18:54.843 "uuid": "b6df873a-0e8c-551f-a94d-786b118c6ea4", 00:18:54.843 "is_configured": true, 00:18:54.843 "data_offset": 2048, 00:18:54.843 "data_size": 63488 00:18:54.843 } 00:18:54.843 ] 00:18:54.843 }' 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.843 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79405 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79405 ']' 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79405 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:55.116 11:31:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.116 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79405 00:18:55.116 killing process with pid 79405 00:18:55.116 Received shutdown signal, test time was about 19.175760 seconds 00:18:55.116 00:18:55.116 Latency(us) 00:18:55.116 [2024-11-20T11:31:02.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.116 [2024-11-20T11:31:02.963Z] =================================================================================================================== 00:18:55.117 [2024-11-20T11:31:02.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.117 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.117 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.117 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79405' 00:18:55.117 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79405 00:18:55.117 11:31:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79405 00:18:55.117 [2024-11-20 11:31:02.742240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.117 [2024-11-20 11:31:02.742397] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.117 [2024-11-20 11:31:02.742508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.117 [2024-11-20 11:31:02.742526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:55.385 [2024-11-20 11:31:03.130402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.796 11:31:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:18:56.796 00:18:56.796 real 0m22.813s 00:18:56.796 user 0m30.903s 00:18:56.796 sys 0m2.386s 00:18:56.796 11:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.796 11:31:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.796 ************************************ 00:18:56.796 END TEST raid_rebuild_test_sb_io 00:18:56.796 ************************************ 00:18:56.796 11:31:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:56.796 11:31:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:56.796 11:31:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:56.796 11:31:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.796 11:31:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.796 ************************************ 00:18:56.796 START TEST raid5f_state_function_test 00:18:56.796 ************************************ 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:56.796 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80134 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80134' 00:18:56.797 Process raid pid: 80134 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80134 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80134 ']' 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.797 11:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.797 [2024-11-20 11:31:04.392839] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:18:56.797 [2024-11-20 11:31:04.393279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.797 [2024-11-20 11:31:04.588350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.055 [2024-11-20 11:31:04.746327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.316 [2024-11-20 11:31:04.972076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.316 [2024-11-20 11:31:04.972137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.575 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.575 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:57.575 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:57.575 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.575 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.575 [2024-11-20 11:31:05.393945] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.575 [2024-11-20 11:31:05.394031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.575 [2024-11-20 11:31:05.394049] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.575 [2024-11-20 11:31:05.394066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.576 [2024-11-20 11:31:05.394076] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:57.576 [2024-11-20 11:31:05.394101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.576 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:57.835 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.835 "name": "Existed_Raid", 00:18:57.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.835 "strip_size_kb": 64, 00:18:57.835 "state": "configuring", 00:18:57.835 "raid_level": "raid5f", 00:18:57.835 "superblock": false, 00:18:57.835 "num_base_bdevs": 3, 00:18:57.835 "num_base_bdevs_discovered": 0, 00:18:57.835 "num_base_bdevs_operational": 3, 00:18:57.835 "base_bdevs_list": [ 00:18:57.835 { 00:18:57.835 "name": "BaseBdev1", 00:18:57.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.835 "is_configured": false, 00:18:57.835 "data_offset": 0, 00:18:57.835 "data_size": 0 00:18:57.835 }, 00:18:57.835 { 00:18:57.835 "name": "BaseBdev2", 00:18:57.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.835 "is_configured": false, 00:18:57.835 "data_offset": 0, 00:18:57.835 "data_size": 0 00:18:57.835 }, 00:18:57.835 { 00:18:57.835 "name": "BaseBdev3", 00:18:57.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.835 "is_configured": false, 00:18:57.835 "data_offset": 0, 00:18:57.835 "data_size": 0 00:18:57.835 } 00:18:57.835 ] 00:18:57.835 }' 00:18:57.835 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.835 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 [2024-11-20 11:31:05.890047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.094 [2024-11-20 11:31:05.890104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.094 [2024-11-20 11:31:05.898041] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.094 [2024-11-20 11:31:05.898109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.094 [2024-11-20 11:31:05.898128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.094 [2024-11-20 11:31:05.898144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.094 [2024-11-20 11:31:05.898154] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.094 [2024-11-20 11:31:05.898168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.094 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 [2024-11-20 11:31:05.943418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.353 BaseBdev1 00:18:58.353 11:31:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 [ 00:18:58.353 { 00:18:58.353 "name": "BaseBdev1", 00:18:58.353 "aliases": [ 00:18:58.353 "2da75332-7756-418a-9de7-b6fd8fb8295d" 00:18:58.353 ], 00:18:58.353 "product_name": "Malloc disk", 00:18:58.353 "block_size": 512, 00:18:58.353 "num_blocks": 65536, 00:18:58.353 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:18:58.353 "assigned_rate_limits": { 00:18:58.353 "rw_ios_per_sec": 0, 00:18:58.353 
"rw_mbytes_per_sec": 0, 00:18:58.353 "r_mbytes_per_sec": 0, 00:18:58.353 "w_mbytes_per_sec": 0 00:18:58.353 }, 00:18:58.353 "claimed": true, 00:18:58.353 "claim_type": "exclusive_write", 00:18:58.353 "zoned": false, 00:18:58.353 "supported_io_types": { 00:18:58.353 "read": true, 00:18:58.353 "write": true, 00:18:58.353 "unmap": true, 00:18:58.353 "flush": true, 00:18:58.353 "reset": true, 00:18:58.353 "nvme_admin": false, 00:18:58.353 "nvme_io": false, 00:18:58.353 "nvme_io_md": false, 00:18:58.353 "write_zeroes": true, 00:18:58.353 "zcopy": true, 00:18:58.353 "get_zone_info": false, 00:18:58.353 "zone_management": false, 00:18:58.353 "zone_append": false, 00:18:58.353 "compare": false, 00:18:58.353 "compare_and_write": false, 00:18:58.353 "abort": true, 00:18:58.353 "seek_hole": false, 00:18:58.353 "seek_data": false, 00:18:58.353 "copy": true, 00:18:58.353 "nvme_iov_md": false 00:18:58.353 }, 00:18:58.353 "memory_domains": [ 00:18:58.353 { 00:18:58.353 "dma_device_id": "system", 00:18:58.353 "dma_device_type": 1 00:18:58.353 }, 00:18:58.353 { 00:18:58.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.353 "dma_device_type": 2 00:18:58.353 } 00:18:58.353 ], 00:18:58.353 "driver_specific": {} 00:18:58.353 } 00:18:58.353 ] 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.353 11:31:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.353 11:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.353 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.353 "name": "Existed_Raid", 00:18:58.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.353 "strip_size_kb": 64, 00:18:58.353 "state": "configuring", 00:18:58.353 "raid_level": "raid5f", 00:18:58.353 "superblock": false, 00:18:58.353 "num_base_bdevs": 3, 00:18:58.353 "num_base_bdevs_discovered": 1, 00:18:58.353 "num_base_bdevs_operational": 3, 00:18:58.353 "base_bdevs_list": [ 00:18:58.353 { 00:18:58.353 "name": "BaseBdev1", 00:18:58.353 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:18:58.353 "is_configured": true, 00:18:58.353 "data_offset": 0, 00:18:58.353 "data_size": 65536 00:18:58.353 }, 00:18:58.353 { 00:18:58.353 "name": 
"BaseBdev2", 00:18:58.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.353 "is_configured": false, 00:18:58.353 "data_offset": 0, 00:18:58.353 "data_size": 0 00:18:58.353 }, 00:18:58.353 { 00:18:58.353 "name": "BaseBdev3", 00:18:58.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.353 "is_configured": false, 00:18:58.353 "data_offset": 0, 00:18:58.353 "data_size": 0 00:18:58.353 } 00:18:58.353 ] 00:18:58.353 }' 00:18:58.353 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.354 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.921 [2024-11-20 11:31:06.491683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:58.921 [2024-11-20 11:31:06.491747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.921 [2024-11-20 11:31:06.499788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.921 [2024-11-20 11:31:06.502506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:58.921 [2024-11-20 11:31:06.502567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.921 [2024-11-20 11:31:06.502585] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.921 [2024-11-20 11:31:06.502600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.921 "name": "Existed_Raid", 00:18:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.921 "strip_size_kb": 64, 00:18:58.921 "state": "configuring", 00:18:58.921 "raid_level": "raid5f", 00:18:58.921 "superblock": false, 00:18:58.921 "num_base_bdevs": 3, 00:18:58.921 "num_base_bdevs_discovered": 1, 00:18:58.921 "num_base_bdevs_operational": 3, 00:18:58.921 "base_bdevs_list": [ 00:18:58.921 { 00:18:58.921 "name": "BaseBdev1", 00:18:58.921 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:18:58.921 "is_configured": true, 00:18:58.921 "data_offset": 0, 00:18:58.921 "data_size": 65536 00:18:58.921 }, 00:18:58.921 { 00:18:58.921 "name": "BaseBdev2", 00:18:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.921 "is_configured": false, 00:18:58.921 "data_offset": 0, 00:18:58.921 "data_size": 0 00:18:58.921 }, 00:18:58.921 { 00:18:58.921 "name": "BaseBdev3", 00:18:58.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.921 "is_configured": false, 00:18:58.921 "data_offset": 0, 00:18:58.921 "data_size": 0 00:18:58.921 } 00:18:58.921 ] 00:18:58.921 }' 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.921 11:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.180 11:31:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:59.180 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.180 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.440 [2024-11-20 11:31:07.047123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.440 BaseBdev2 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.440 [ 00:18:59.440 { 00:18:59.440 "name": "BaseBdev2", 00:18:59.440 "aliases": [ 00:18:59.440 "473bb672-0d7f-4500-8986-1c00ef84dcef" 00:18:59.440 ], 00:18:59.440 "product_name": "Malloc disk", 00:18:59.440 "block_size": 512, 00:18:59.440 "num_blocks": 65536, 00:18:59.440 "uuid": "473bb672-0d7f-4500-8986-1c00ef84dcef", 00:18:59.440 "assigned_rate_limits": { 00:18:59.440 "rw_ios_per_sec": 0, 00:18:59.440 "rw_mbytes_per_sec": 0, 00:18:59.440 "r_mbytes_per_sec": 0, 00:18:59.440 "w_mbytes_per_sec": 0 00:18:59.440 }, 00:18:59.440 "claimed": true, 00:18:59.440 "claim_type": "exclusive_write", 00:18:59.440 "zoned": false, 00:18:59.440 "supported_io_types": { 00:18:59.440 "read": true, 00:18:59.440 "write": true, 00:18:59.440 "unmap": true, 00:18:59.440 "flush": true, 00:18:59.440 "reset": true, 00:18:59.440 "nvme_admin": false, 00:18:59.440 "nvme_io": false, 00:18:59.440 "nvme_io_md": false, 00:18:59.440 "write_zeroes": true, 00:18:59.440 "zcopy": true, 00:18:59.440 "get_zone_info": false, 00:18:59.440 "zone_management": false, 00:18:59.440 "zone_append": false, 00:18:59.440 "compare": false, 00:18:59.440 "compare_and_write": false, 00:18:59.440 "abort": true, 00:18:59.440 "seek_hole": false, 00:18:59.440 "seek_data": false, 00:18:59.440 "copy": true, 00:18:59.440 "nvme_iov_md": false 00:18:59.440 }, 00:18:59.440 "memory_domains": [ 00:18:59.440 { 00:18:59.440 "dma_device_id": "system", 00:18:59.440 "dma_device_type": 1 00:18:59.440 }, 00:18:59.440 { 00:18:59.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.440 "dma_device_type": 2 00:18:59.440 } 00:18:59.440 ], 00:18:59.440 "driver_specific": {} 00:18:59.440 } 00:18:59.440 ] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.440 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:59.440 "name": "Existed_Raid", 00:18:59.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.440 "strip_size_kb": 64, 00:18:59.440 "state": "configuring", 00:18:59.440 "raid_level": "raid5f", 00:18:59.440 "superblock": false, 00:18:59.440 "num_base_bdevs": 3, 00:18:59.440 "num_base_bdevs_discovered": 2, 00:18:59.440 "num_base_bdevs_operational": 3, 00:18:59.440 "base_bdevs_list": [ 00:18:59.440 { 00:18:59.440 "name": "BaseBdev1", 00:18:59.440 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:18:59.440 "is_configured": true, 00:18:59.440 "data_offset": 0, 00:18:59.440 "data_size": 65536 00:18:59.440 }, 00:18:59.440 { 00:18:59.440 "name": "BaseBdev2", 00:18:59.440 "uuid": "473bb672-0d7f-4500-8986-1c00ef84dcef", 00:18:59.440 "is_configured": true, 00:18:59.440 "data_offset": 0, 00:18:59.440 "data_size": 65536 00:18:59.440 }, 00:18:59.440 { 00:18:59.440 "name": "BaseBdev3", 00:18:59.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.440 "is_configured": false, 00:18:59.440 "data_offset": 0, 00:18:59.440 "data_size": 0 00:18:59.441 } 00:18:59.441 ] 00:18:59.441 }' 00:18:59.441 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.441 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.009 [2024-11-20 11:31:07.606609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.009 [2024-11-20 11:31:07.607057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:00.009 [2024-11-20 11:31:07.607090] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:00.009 [2024-11-20 11:31:07.607435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:00.009 [2024-11-20 11:31:07.613009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:00.009 [2024-11-20 11:31:07.613037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:00.009 [2024-11-20 11:31:07.613375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.009 BaseBdev3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.009 [ 00:19:00.009 { 00:19:00.009 "name": "BaseBdev3", 00:19:00.009 "aliases": [ 00:19:00.009 "46fdbb42-372f-48bb-82cf-21ed3ccebc5d" 00:19:00.009 ], 00:19:00.009 "product_name": "Malloc disk", 00:19:00.009 "block_size": 512, 00:19:00.009 "num_blocks": 65536, 00:19:00.009 "uuid": "46fdbb42-372f-48bb-82cf-21ed3ccebc5d", 00:19:00.009 "assigned_rate_limits": { 00:19:00.009 "rw_ios_per_sec": 0, 00:19:00.009 "rw_mbytes_per_sec": 0, 00:19:00.009 "r_mbytes_per_sec": 0, 00:19:00.009 "w_mbytes_per_sec": 0 00:19:00.009 }, 00:19:00.009 "claimed": true, 00:19:00.009 "claim_type": "exclusive_write", 00:19:00.009 "zoned": false, 00:19:00.009 "supported_io_types": { 00:19:00.009 "read": true, 00:19:00.009 "write": true, 00:19:00.009 "unmap": true, 00:19:00.009 "flush": true, 00:19:00.009 "reset": true, 00:19:00.009 "nvme_admin": false, 00:19:00.009 "nvme_io": false, 00:19:00.009 "nvme_io_md": false, 00:19:00.009 "write_zeroes": true, 00:19:00.009 "zcopy": true, 00:19:00.009 "get_zone_info": false, 00:19:00.009 "zone_management": false, 00:19:00.009 "zone_append": false, 00:19:00.009 "compare": false, 00:19:00.009 "compare_and_write": false, 00:19:00.009 "abort": true, 00:19:00.009 "seek_hole": false, 00:19:00.009 "seek_data": false, 00:19:00.009 "copy": true, 00:19:00.009 "nvme_iov_md": false 00:19:00.009 }, 00:19:00.009 "memory_domains": [ 00:19:00.009 { 00:19:00.009 "dma_device_id": "system", 00:19:00.009 "dma_device_type": 1 00:19:00.009 }, 00:19:00.009 { 00:19:00.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.009 "dma_device_type": 2 00:19:00.009 } 00:19:00.009 ], 00:19:00.009 "driver_specific": {} 00:19:00.009 } 00:19:00.009 ] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.009 11:31:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.009 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.009 "name": "Existed_Raid", 00:19:00.009 "uuid": "528c6978-8571-4425-a2d6-58533b56b876", 00:19:00.009 "strip_size_kb": 64, 00:19:00.009 "state": "online", 00:19:00.009 "raid_level": "raid5f", 00:19:00.009 "superblock": false, 00:19:00.009 "num_base_bdevs": 3, 00:19:00.009 "num_base_bdevs_discovered": 3, 00:19:00.009 "num_base_bdevs_operational": 3, 00:19:00.009 "base_bdevs_list": [ 00:19:00.009 { 00:19:00.009 "name": "BaseBdev1", 00:19:00.009 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:19:00.009 "is_configured": true, 00:19:00.009 "data_offset": 0, 00:19:00.009 "data_size": 65536 00:19:00.009 }, 00:19:00.009 { 00:19:00.009 "name": "BaseBdev2", 00:19:00.009 "uuid": "473bb672-0d7f-4500-8986-1c00ef84dcef", 00:19:00.009 "is_configured": true, 00:19:00.009 "data_offset": 0, 00:19:00.009 "data_size": 65536 00:19:00.009 }, 00:19:00.009 { 00:19:00.010 "name": "BaseBdev3", 00:19:00.010 "uuid": "46fdbb42-372f-48bb-82cf-21ed3ccebc5d", 00:19:00.010 "is_configured": true, 00:19:00.010 "data_offset": 0, 00:19:00.010 "data_size": 65536 00:19:00.010 } 00:19:00.010 ] 00:19:00.010 }' 00:19:00.010 11:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.010 11:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.576 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:00.577 11:31:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:00.577 [2024-11-20 11:31:08.148687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:00.577 "name": "Existed_Raid", 00:19:00.577 "aliases": [ 00:19:00.577 "528c6978-8571-4425-a2d6-58533b56b876" 00:19:00.577 ], 00:19:00.577 "product_name": "Raid Volume", 00:19:00.577 "block_size": 512, 00:19:00.577 "num_blocks": 131072, 00:19:00.577 "uuid": "528c6978-8571-4425-a2d6-58533b56b876", 00:19:00.577 "assigned_rate_limits": { 00:19:00.577 "rw_ios_per_sec": 0, 00:19:00.577 "rw_mbytes_per_sec": 0, 00:19:00.577 "r_mbytes_per_sec": 0, 00:19:00.577 "w_mbytes_per_sec": 0 00:19:00.577 }, 00:19:00.577 "claimed": false, 00:19:00.577 "zoned": false, 00:19:00.577 "supported_io_types": { 00:19:00.577 "read": true, 00:19:00.577 "write": true, 00:19:00.577 "unmap": false, 00:19:00.577 "flush": false, 00:19:00.577 "reset": true, 00:19:00.577 "nvme_admin": false, 00:19:00.577 "nvme_io": false, 00:19:00.577 "nvme_io_md": false, 00:19:00.577 "write_zeroes": true, 00:19:00.577 "zcopy": false, 00:19:00.577 "get_zone_info": false, 00:19:00.577 "zone_management": false, 00:19:00.577 "zone_append": false, 
00:19:00.577 "compare": false, 00:19:00.577 "compare_and_write": false, 00:19:00.577 "abort": false, 00:19:00.577 "seek_hole": false, 00:19:00.577 "seek_data": false, 00:19:00.577 "copy": false, 00:19:00.577 "nvme_iov_md": false 00:19:00.577 }, 00:19:00.577 "driver_specific": { 00:19:00.577 "raid": { 00:19:00.577 "uuid": "528c6978-8571-4425-a2d6-58533b56b876", 00:19:00.577 "strip_size_kb": 64, 00:19:00.577 "state": "online", 00:19:00.577 "raid_level": "raid5f", 00:19:00.577 "superblock": false, 00:19:00.577 "num_base_bdevs": 3, 00:19:00.577 "num_base_bdevs_discovered": 3, 00:19:00.577 "num_base_bdevs_operational": 3, 00:19:00.577 "base_bdevs_list": [ 00:19:00.577 { 00:19:00.577 "name": "BaseBdev1", 00:19:00.577 "uuid": "2da75332-7756-418a-9de7-b6fd8fb8295d", 00:19:00.577 "is_configured": true, 00:19:00.577 "data_offset": 0, 00:19:00.577 "data_size": 65536 00:19:00.577 }, 00:19:00.577 { 00:19:00.577 "name": "BaseBdev2", 00:19:00.577 "uuid": "473bb672-0d7f-4500-8986-1c00ef84dcef", 00:19:00.577 "is_configured": true, 00:19:00.577 "data_offset": 0, 00:19:00.577 "data_size": 65536 00:19:00.577 }, 00:19:00.577 { 00:19:00.577 "name": "BaseBdev3", 00:19:00.577 "uuid": "46fdbb42-372f-48bb-82cf-21ed3ccebc5d", 00:19:00.577 "is_configured": true, 00:19:00.577 "data_offset": 0, 00:19:00.577 "data_size": 65536 00:19:00.577 } 00:19:00.577 ] 00:19:00.577 } 00:19:00.577 } 00:19:00.577 }' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:00.577 BaseBdev2 00:19:00.577 BaseBdev3' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.577 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.837 [2024-11-20 11:31:08.460677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:00.837 
11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.837 "name": "Existed_Raid", 00:19:00.837 "uuid": "528c6978-8571-4425-a2d6-58533b56b876", 00:19:00.837 "strip_size_kb": 64, 00:19:00.837 "state": 
"online", 00:19:00.837 "raid_level": "raid5f", 00:19:00.837 "superblock": false, 00:19:00.837 "num_base_bdevs": 3, 00:19:00.837 "num_base_bdevs_discovered": 2, 00:19:00.837 "num_base_bdevs_operational": 2, 00:19:00.837 "base_bdevs_list": [ 00:19:00.837 { 00:19:00.837 "name": null, 00:19:00.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.837 "is_configured": false, 00:19:00.837 "data_offset": 0, 00:19:00.837 "data_size": 65536 00:19:00.837 }, 00:19:00.837 { 00:19:00.837 "name": "BaseBdev2", 00:19:00.837 "uuid": "473bb672-0d7f-4500-8986-1c00ef84dcef", 00:19:00.837 "is_configured": true, 00:19:00.837 "data_offset": 0, 00:19:00.837 "data_size": 65536 00:19:00.837 }, 00:19:00.837 { 00:19:00.837 "name": "BaseBdev3", 00:19:00.837 "uuid": "46fdbb42-372f-48bb-82cf-21ed3ccebc5d", 00:19:00.837 "is_configured": true, 00:19:00.837 "data_offset": 0, 00:19:00.837 "data_size": 65536 00:19:00.837 } 00:19:00.837 ] 00:19:00.837 }' 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.837 11:31:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 [2024-11-20 11:31:09.123052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.405 [2024-11-20 11:31:09.123549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.405 [2024-11-20 11:31:09.210985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.666 [2024-11-20 11:31:09.275077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:01.666 [2024-11-20 11:31:09.275521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:01.666 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.667 BaseBdev2 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:01.667 [ 00:19:01.667 { 00:19:01.667 "name": "BaseBdev2", 00:19:01.667 "aliases": [ 00:19:01.667 "733df889-21ac-42df-9d1a-9e3962842be5" 00:19:01.667 ], 00:19:01.667 "product_name": "Malloc disk", 00:19:01.667 "block_size": 512, 00:19:01.667 "num_blocks": 65536, 00:19:01.667 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:01.667 "assigned_rate_limits": { 00:19:01.667 "rw_ios_per_sec": 0, 00:19:01.667 "rw_mbytes_per_sec": 0, 00:19:01.667 "r_mbytes_per_sec": 0, 00:19:01.667 "w_mbytes_per_sec": 0 00:19:01.667 }, 00:19:01.667 "claimed": false, 00:19:01.667 "zoned": false, 00:19:01.667 "supported_io_types": { 00:19:01.667 "read": true, 00:19:01.667 "write": true, 00:19:01.667 "unmap": true, 00:19:01.667 "flush": true, 00:19:01.667 "reset": true, 00:19:01.667 "nvme_admin": false, 00:19:01.667 "nvme_io": false, 00:19:01.667 "nvme_io_md": false, 00:19:01.667 "write_zeroes": true, 00:19:01.667 "zcopy": true, 00:19:01.667 "get_zone_info": false, 00:19:01.667 "zone_management": false, 00:19:01.667 "zone_append": false, 00:19:01.667 "compare": false, 00:19:01.667 "compare_and_write": false, 00:19:01.667 "abort": true, 00:19:01.667 "seek_hole": false, 00:19:01.667 "seek_data": false, 00:19:01.667 "copy": true, 00:19:01.667 "nvme_iov_md": false 00:19:01.667 }, 00:19:01.667 "memory_domains": [ 00:19:01.667 { 00:19:01.667 "dma_device_id": "system", 00:19:01.667 "dma_device_type": 1 00:19:01.667 }, 00:19:01.667 { 00:19:01.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.667 "dma_device_type": 2 00:19:01.667 } 00:19:01.667 ], 00:19:01.667 "driver_specific": {} 00:19:01.667 } 00:19:01.667 ] 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.667 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 BaseBdev3 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.927 [ 00:19:01.927 { 00:19:01.927 "name": "BaseBdev3", 00:19:01.927 "aliases": [ 00:19:01.927 "4ef85c16-0463-4912-832f-acf673eb7cc6" 00:19:01.927 ], 00:19:01.927 "product_name": "Malloc disk", 00:19:01.927 "block_size": 512, 00:19:01.927 "num_blocks": 65536, 00:19:01.927 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:01.927 "assigned_rate_limits": { 00:19:01.927 "rw_ios_per_sec": 0, 00:19:01.927 "rw_mbytes_per_sec": 0, 00:19:01.927 "r_mbytes_per_sec": 0, 00:19:01.927 "w_mbytes_per_sec": 0 00:19:01.927 }, 00:19:01.927 "claimed": false, 00:19:01.927 "zoned": false, 00:19:01.927 "supported_io_types": { 00:19:01.927 "read": true, 00:19:01.927 "write": true, 00:19:01.927 "unmap": true, 00:19:01.927 "flush": true, 00:19:01.927 "reset": true, 00:19:01.927 "nvme_admin": false, 00:19:01.927 "nvme_io": false, 00:19:01.927 "nvme_io_md": false, 00:19:01.927 "write_zeroes": true, 00:19:01.927 "zcopy": true, 00:19:01.927 "get_zone_info": false, 00:19:01.927 "zone_management": false, 00:19:01.927 "zone_append": false, 00:19:01.927 "compare": false, 00:19:01.927 "compare_and_write": false, 00:19:01.927 "abort": true, 00:19:01.927 "seek_hole": false, 00:19:01.927 "seek_data": false, 00:19:01.927 "copy": true, 00:19:01.927 "nvme_iov_md": false 00:19:01.927 }, 00:19:01.927 "memory_domains": [ 00:19:01.927 { 00:19:01.927 "dma_device_id": "system", 00:19:01.927 "dma_device_type": 1 00:19:01.927 }, 00:19:01.927 { 00:19:01.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.927 "dma_device_type": 2 00:19:01.927 } 00:19:01.927 ], 00:19:01.927 "driver_specific": {} 00:19:01.927 } 00:19:01.927 ] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:01.927 11:31:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 [2024-11-20 11:31:09.574932] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.927 [2024-11-20 11:31:09.574993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.927 [2024-11-20 11:31:09.575031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.927 [2024-11-20 11:31:09.577725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.927 11:31:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.927 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.927 "name": "Existed_Raid", 00:19:01.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.927 "strip_size_kb": 64, 00:19:01.927 "state": "configuring", 00:19:01.927 "raid_level": "raid5f", 00:19:01.927 "superblock": false, 00:19:01.927 "num_base_bdevs": 3, 00:19:01.927 "num_base_bdevs_discovered": 2, 00:19:01.927 "num_base_bdevs_operational": 3, 00:19:01.927 "base_bdevs_list": [ 00:19:01.927 { 00:19:01.927 "name": "BaseBdev1", 00:19:01.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.927 "is_configured": false, 00:19:01.927 "data_offset": 0, 00:19:01.927 "data_size": 0 00:19:01.927 }, 00:19:01.927 { 00:19:01.927 "name": "BaseBdev2", 00:19:01.927 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:01.927 "is_configured": true, 00:19:01.927 "data_offset": 0, 00:19:01.927 "data_size": 65536 00:19:01.927 }, 00:19:01.927 { 00:19:01.927 "name": "BaseBdev3", 00:19:01.927 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:01.927 "is_configured": true, 
00:19:01.927 "data_offset": 0, 00:19:01.927 "data_size": 65536 00:19:01.927 } 00:19:01.928 ] 00:19:01.928 }' 00:19:01.928 11:31:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.928 11:31:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.496 [2024-11-20 11:31:10.091187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.496 11:31:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.496 "name": "Existed_Raid", 00:19:02.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.496 "strip_size_kb": 64, 00:19:02.496 "state": "configuring", 00:19:02.496 "raid_level": "raid5f", 00:19:02.496 "superblock": false, 00:19:02.496 "num_base_bdevs": 3, 00:19:02.496 "num_base_bdevs_discovered": 1, 00:19:02.496 "num_base_bdevs_operational": 3, 00:19:02.496 "base_bdevs_list": [ 00:19:02.496 { 00:19:02.496 "name": "BaseBdev1", 00:19:02.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.496 "is_configured": false, 00:19:02.496 "data_offset": 0, 00:19:02.496 "data_size": 0 00:19:02.496 }, 00:19:02.496 { 00:19:02.496 "name": null, 00:19:02.496 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:02.496 "is_configured": false, 00:19:02.496 "data_offset": 0, 00:19:02.496 "data_size": 65536 00:19:02.496 }, 00:19:02.496 { 00:19:02.496 "name": "BaseBdev3", 00:19:02.496 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:02.496 "is_configured": true, 00:19:02.496 "data_offset": 0, 00:19:02.496 "data_size": 65536 00:19:02.496 } 00:19:02.496 ] 00:19:02.496 }' 00:19:02.496 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.496 11:31:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.074 [2024-11-20 11:31:10.721680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.074 BaseBdev1 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.074 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:03.075 11:31:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.075 [ 00:19:03.075 { 00:19:03.075 "name": "BaseBdev1", 00:19:03.075 "aliases": [ 00:19:03.075 "f64751d6-c64c-4087-9293-44b0793f49e1" 00:19:03.075 ], 00:19:03.075 "product_name": "Malloc disk", 00:19:03.075 "block_size": 512, 00:19:03.075 "num_blocks": 65536, 00:19:03.075 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:03.075 "assigned_rate_limits": { 00:19:03.075 "rw_ios_per_sec": 0, 00:19:03.075 "rw_mbytes_per_sec": 0, 00:19:03.075 "r_mbytes_per_sec": 0, 00:19:03.075 "w_mbytes_per_sec": 0 00:19:03.075 }, 00:19:03.075 "claimed": true, 00:19:03.075 "claim_type": "exclusive_write", 00:19:03.075 "zoned": false, 00:19:03.075 "supported_io_types": { 00:19:03.075 "read": true, 00:19:03.075 "write": true, 00:19:03.075 "unmap": true, 00:19:03.075 "flush": true, 00:19:03.075 "reset": true, 00:19:03.075 "nvme_admin": false, 00:19:03.075 "nvme_io": false, 00:19:03.075 "nvme_io_md": false, 00:19:03.075 "write_zeroes": true, 00:19:03.075 "zcopy": true, 00:19:03.075 "get_zone_info": false, 00:19:03.075 "zone_management": false, 00:19:03.075 "zone_append": false, 00:19:03.075 
"compare": false, 00:19:03.075 "compare_and_write": false, 00:19:03.075 "abort": true, 00:19:03.075 "seek_hole": false, 00:19:03.075 "seek_data": false, 00:19:03.075 "copy": true, 00:19:03.075 "nvme_iov_md": false 00:19:03.075 }, 00:19:03.075 "memory_domains": [ 00:19:03.075 { 00:19:03.075 "dma_device_id": "system", 00:19:03.075 "dma_device_type": 1 00:19:03.075 }, 00:19:03.075 { 00:19:03.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.075 "dma_device_type": 2 00:19:03.075 } 00:19:03.075 ], 00:19:03.075 "driver_specific": {} 00:19:03.075 } 00:19:03.075 ] 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.075 11:31:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.075 "name": "Existed_Raid", 00:19:03.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.075 "strip_size_kb": 64, 00:19:03.075 "state": "configuring", 00:19:03.075 "raid_level": "raid5f", 00:19:03.075 "superblock": false, 00:19:03.075 "num_base_bdevs": 3, 00:19:03.075 "num_base_bdevs_discovered": 2, 00:19:03.075 "num_base_bdevs_operational": 3, 00:19:03.075 "base_bdevs_list": [ 00:19:03.075 { 00:19:03.075 "name": "BaseBdev1", 00:19:03.075 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:03.075 "is_configured": true, 00:19:03.075 "data_offset": 0, 00:19:03.075 "data_size": 65536 00:19:03.075 }, 00:19:03.075 { 00:19:03.075 "name": null, 00:19:03.075 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:03.075 "is_configured": false, 00:19:03.075 "data_offset": 0, 00:19:03.075 "data_size": 65536 00:19:03.075 }, 00:19:03.075 { 00:19:03.075 "name": "BaseBdev3", 00:19:03.075 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:03.075 "is_configured": true, 00:19:03.075 "data_offset": 0, 00:19:03.075 "data_size": 65536 00:19:03.075 } 00:19:03.075 ] 00:19:03.075 }' 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.075 11:31:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 11:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 [2024-11-20 11:31:11.333930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.643 11:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.643 "name": "Existed_Raid", 00:19:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.643 "strip_size_kb": 64, 00:19:03.643 "state": "configuring", 00:19:03.643 "raid_level": "raid5f", 00:19:03.643 "superblock": false, 00:19:03.643 "num_base_bdevs": 3, 00:19:03.643 "num_base_bdevs_discovered": 1, 00:19:03.643 "num_base_bdevs_operational": 3, 00:19:03.643 "base_bdevs_list": [ 00:19:03.643 { 00:19:03.643 "name": "BaseBdev1", 00:19:03.643 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:03.643 "is_configured": true, 00:19:03.643 "data_offset": 0, 00:19:03.643 "data_size": 65536 00:19:03.643 }, 00:19:03.643 { 00:19:03.643 "name": null, 00:19:03.643 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:03.643 "is_configured": false, 00:19:03.643 "data_offset": 0, 00:19:03.643 "data_size": 65536 00:19:03.643 }, 00:19:03.643 { 00:19:03.643 "name": null, 
00:19:03.643 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:03.643 "is_configured": false, 00:19:03.643 "data_offset": 0, 00:19:03.643 "data_size": 65536 00:19:03.643 } 00:19:03.643 ] 00:19:03.643 }' 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.643 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.210 [2024-11-20 11:31:11.966314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:04.210 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.211 11:31:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.211 11:31:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.211 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.211 "name": "Existed_Raid", 00:19:04.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.211 "strip_size_kb": 64, 00:19:04.211 "state": "configuring", 00:19:04.211 "raid_level": "raid5f", 00:19:04.211 "superblock": false, 00:19:04.211 "num_base_bdevs": 3, 00:19:04.211 "num_base_bdevs_discovered": 2, 00:19:04.211 "num_base_bdevs_operational": 3, 00:19:04.211 "base_bdevs_list": [ 00:19:04.211 { 
00:19:04.211 "name": "BaseBdev1", 00:19:04.211 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:04.211 "is_configured": true, 00:19:04.211 "data_offset": 0, 00:19:04.211 "data_size": 65536 00:19:04.211 }, 00:19:04.211 { 00:19:04.211 "name": null, 00:19:04.211 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:04.211 "is_configured": false, 00:19:04.211 "data_offset": 0, 00:19:04.211 "data_size": 65536 00:19:04.211 }, 00:19:04.211 { 00:19:04.211 "name": "BaseBdev3", 00:19:04.211 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:04.211 "is_configured": true, 00:19:04.211 "data_offset": 0, 00:19:04.211 "data_size": 65536 00:19:04.211 } 00:19:04.211 ] 00:19:04.211 }' 00:19:04.211 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.211 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.779 [2024-11-20 11:31:12.530480] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.779 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.038 "name": "Existed_Raid", 00:19:05.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.038 "strip_size_kb": 64, 00:19:05.038 "state": "configuring", 00:19:05.038 "raid_level": "raid5f", 00:19:05.038 "superblock": false, 00:19:05.038 "num_base_bdevs": 3, 00:19:05.038 "num_base_bdevs_discovered": 1, 00:19:05.038 "num_base_bdevs_operational": 3, 00:19:05.038 "base_bdevs_list": [ 00:19:05.038 { 00:19:05.038 "name": null, 00:19:05.038 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:05.038 "is_configured": false, 00:19:05.038 "data_offset": 0, 00:19:05.038 "data_size": 65536 00:19:05.038 }, 00:19:05.038 { 00:19:05.038 "name": null, 00:19:05.038 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:05.038 "is_configured": false, 00:19:05.038 "data_offset": 0, 00:19:05.038 "data_size": 65536 00:19:05.038 }, 00:19:05.038 { 00:19:05.038 "name": "BaseBdev3", 00:19:05.038 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:05.038 "is_configured": true, 00:19:05.038 "data_offset": 0, 00:19:05.038 "data_size": 65536 00:19:05.038 } 00:19:05.038 ] 00:19:05.038 }' 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.038 11:31:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 [2024-11-20 11:31:13.222326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.605 11:31:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.605 "name": "Existed_Raid", 00:19:05.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.605 "strip_size_kb": 64, 00:19:05.605 "state": "configuring", 00:19:05.605 "raid_level": "raid5f", 00:19:05.605 "superblock": false, 00:19:05.605 "num_base_bdevs": 3, 00:19:05.605 "num_base_bdevs_discovered": 2, 00:19:05.605 "num_base_bdevs_operational": 3, 00:19:05.605 "base_bdevs_list": [ 00:19:05.605 { 00:19:05.605 "name": null, 00:19:05.605 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:05.605 "is_configured": false, 00:19:05.605 "data_offset": 0, 00:19:05.605 "data_size": 65536 00:19:05.605 }, 00:19:05.605 { 00:19:05.605 "name": "BaseBdev2", 00:19:05.605 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:05.605 "is_configured": true, 00:19:05.605 "data_offset": 0, 00:19:05.605 "data_size": 65536 00:19:05.605 }, 00:19:05.605 { 00:19:05.605 "name": "BaseBdev3", 00:19:05.605 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:05.605 "is_configured": true, 00:19:05.605 "data_offset": 0, 00:19:05.605 "data_size": 65536 00:19:05.605 } 00:19:05.605 ] 00:19:05.605 }' 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.605 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.171 11:31:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f64751d6-c64c-4087-9293-44b0793f49e1 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 [2024-11-20 11:31:13.848133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:06.171 [2024-11-20 11:31:13.848192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:06.171 [2024-11-20 11:31:13.848207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:06.171 [2024-11-20 11:31:13.848498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:19:06.171 [2024-11-20 11:31:13.854055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:06.171 [2024-11-20 11:31:13.854431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:06.171 [2024-11-20 11:31:13.855061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.171 NewBaseBdev 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:06.171 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.172 11:31:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.172 [ 00:19:06.172 { 00:19:06.172 "name": "NewBaseBdev", 00:19:06.172 "aliases": [ 00:19:06.172 "f64751d6-c64c-4087-9293-44b0793f49e1" 00:19:06.172 ], 00:19:06.172 "product_name": "Malloc disk", 00:19:06.172 "block_size": 512, 00:19:06.172 "num_blocks": 65536, 00:19:06.172 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:06.172 "assigned_rate_limits": { 00:19:06.172 "rw_ios_per_sec": 0, 00:19:06.172 "rw_mbytes_per_sec": 0, 00:19:06.172 "r_mbytes_per_sec": 0, 00:19:06.172 "w_mbytes_per_sec": 0 00:19:06.172 }, 00:19:06.172 "claimed": true, 00:19:06.172 "claim_type": "exclusive_write", 00:19:06.172 "zoned": false, 00:19:06.172 "supported_io_types": { 00:19:06.172 "read": true, 00:19:06.172 "write": true, 00:19:06.172 "unmap": true, 00:19:06.172 "flush": true, 00:19:06.172 "reset": true, 00:19:06.172 "nvme_admin": false, 00:19:06.172 "nvme_io": false, 00:19:06.172 "nvme_io_md": false, 00:19:06.172 "write_zeroes": true, 00:19:06.172 "zcopy": true, 00:19:06.172 "get_zone_info": false, 00:19:06.172 "zone_management": false, 00:19:06.172 "zone_append": false, 00:19:06.172 "compare": false, 00:19:06.172 "compare_and_write": false, 00:19:06.172 "abort": true, 00:19:06.172 "seek_hole": false, 00:19:06.172 "seek_data": false, 00:19:06.172 "copy": true, 00:19:06.172 "nvme_iov_md": false 00:19:06.172 }, 00:19:06.172 "memory_domains": [ 00:19:06.172 { 00:19:06.172 "dma_device_id": "system", 00:19:06.172 "dma_device_type": 1 00:19:06.172 }, 00:19:06.172 { 00:19:06.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.172 "dma_device_type": 2 00:19:06.172 } 00:19:06.172 ], 00:19:06.172 "driver_specific": {} 00:19:06.172 } 00:19:06.172 ] 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:06.172 11:31:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.172 "name": "Existed_Raid", 00:19:06.172 "uuid": "9abcf108-7021-4328-83ae-76809f51397d", 00:19:06.172 "strip_size_kb": 64, 00:19:06.172 "state": "online", 
00:19:06.172 "raid_level": "raid5f", 00:19:06.172 "superblock": false, 00:19:06.172 "num_base_bdevs": 3, 00:19:06.172 "num_base_bdevs_discovered": 3, 00:19:06.172 "num_base_bdevs_operational": 3, 00:19:06.172 "base_bdevs_list": [ 00:19:06.172 { 00:19:06.172 "name": "NewBaseBdev", 00:19:06.172 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:06.172 "is_configured": true, 00:19:06.172 "data_offset": 0, 00:19:06.172 "data_size": 65536 00:19:06.172 }, 00:19:06.172 { 00:19:06.172 "name": "BaseBdev2", 00:19:06.172 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:06.172 "is_configured": true, 00:19:06.172 "data_offset": 0, 00:19:06.172 "data_size": 65536 00:19:06.172 }, 00:19:06.172 { 00:19:06.172 "name": "BaseBdev3", 00:19:06.172 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:06.172 "is_configured": true, 00:19:06.172 "data_offset": 0, 00:19:06.172 "data_size": 65536 00:19:06.172 } 00:19:06.172 ] 00:19:06.172 }' 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.172 11:31:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:06.740 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:06.740 11:31:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:06.741 [2024-11-20 11:31:14.393851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.741 "name": "Existed_Raid", 00:19:06.741 "aliases": [ 00:19:06.741 "9abcf108-7021-4328-83ae-76809f51397d" 00:19:06.741 ], 00:19:06.741 "product_name": "Raid Volume", 00:19:06.741 "block_size": 512, 00:19:06.741 "num_blocks": 131072, 00:19:06.741 "uuid": "9abcf108-7021-4328-83ae-76809f51397d", 00:19:06.741 "assigned_rate_limits": { 00:19:06.741 "rw_ios_per_sec": 0, 00:19:06.741 "rw_mbytes_per_sec": 0, 00:19:06.741 "r_mbytes_per_sec": 0, 00:19:06.741 "w_mbytes_per_sec": 0 00:19:06.741 }, 00:19:06.741 "claimed": false, 00:19:06.741 "zoned": false, 00:19:06.741 "supported_io_types": { 00:19:06.741 "read": true, 00:19:06.741 "write": true, 00:19:06.741 "unmap": false, 00:19:06.741 "flush": false, 00:19:06.741 "reset": true, 00:19:06.741 "nvme_admin": false, 00:19:06.741 "nvme_io": false, 00:19:06.741 "nvme_io_md": false, 00:19:06.741 "write_zeroes": true, 00:19:06.741 "zcopy": false, 00:19:06.741 "get_zone_info": false, 00:19:06.741 "zone_management": false, 00:19:06.741 "zone_append": false, 00:19:06.741 "compare": false, 00:19:06.741 "compare_and_write": false, 00:19:06.741 "abort": false, 00:19:06.741 "seek_hole": false, 00:19:06.741 "seek_data": false, 00:19:06.741 "copy": false, 00:19:06.741 "nvme_iov_md": false 00:19:06.741 }, 00:19:06.741 "driver_specific": { 00:19:06.741 "raid": { 00:19:06.741 "uuid": 
"9abcf108-7021-4328-83ae-76809f51397d", 00:19:06.741 "strip_size_kb": 64, 00:19:06.741 "state": "online", 00:19:06.741 "raid_level": "raid5f", 00:19:06.741 "superblock": false, 00:19:06.741 "num_base_bdevs": 3, 00:19:06.741 "num_base_bdevs_discovered": 3, 00:19:06.741 "num_base_bdevs_operational": 3, 00:19:06.741 "base_bdevs_list": [ 00:19:06.741 { 00:19:06.741 "name": "NewBaseBdev", 00:19:06.741 "uuid": "f64751d6-c64c-4087-9293-44b0793f49e1", 00:19:06.741 "is_configured": true, 00:19:06.741 "data_offset": 0, 00:19:06.741 "data_size": 65536 00:19:06.741 }, 00:19:06.741 { 00:19:06.741 "name": "BaseBdev2", 00:19:06.741 "uuid": "733df889-21ac-42df-9d1a-9e3962842be5", 00:19:06.741 "is_configured": true, 00:19:06.741 "data_offset": 0, 00:19:06.741 "data_size": 65536 00:19:06.741 }, 00:19:06.741 { 00:19:06.741 "name": "BaseBdev3", 00:19:06.741 "uuid": "4ef85c16-0463-4912-832f-acf673eb7cc6", 00:19:06.741 "is_configured": true, 00:19:06.741 "data_offset": 0, 00:19:06.741 "data_size": 65536 00:19:06.741 } 00:19:06.741 ] 00:19:06.741 } 00:19:06.741 } 00:19:06.741 }' 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:06.741 BaseBdev2 00:19:06.741 BaseBdev3' 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:06.741 11:31:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.741 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.003 [2024-11-20 11:31:14.745851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.003 [2024-11-20 11:31:14.745886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.003 [2024-11-20 11:31:14.746002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.003 [2024-11-20 11:31:14.746366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.003 [2024-11-20 11:31:14.746389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80134 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80134 ']' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80134 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80134 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80134' 00:19:07.003 killing process with pid 80134 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80134 00:19:07.003 11:31:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80134 00:19:07.003 [2024-11-20 11:31:14.784440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.262 [2024-11-20 11:31:15.061769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:08.640 ************************************ 00:19:08.640 END TEST raid5f_state_function_test 00:19:08.640 ************************************ 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:08.640 00:19:08.640 real 0m11.827s 00:19:08.640 user 0m19.352s 00:19:08.640 sys 0m1.769s 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.640 11:31:16 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:08.640 11:31:16 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:08.640 11:31:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.640 11:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:08.640 ************************************ 00:19:08.640 START TEST raid5f_state_function_test_sb 00:19:08.640 ************************************ 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:08.640 11:31:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:08.640 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:08.641 Process raid pid: 80767 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80767 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80767' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80767 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80767 ']' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.641 11:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.641 [2024-11-20 11:31:16.260023] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:19:08.641 [2024-11-20 11:31:16.260191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.641 [2024-11-20 11:31:16.436678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.899 [2024-11-20 11:31:16.568121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.157 [2024-11-20 11:31:16.775800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.157 [2024-11-20 11:31:16.775874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.416 [2024-11-20 11:31:17.246788] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.416 [2024-11-20 11:31:17.246856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.416 [2024-11-20 11:31:17.246874] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.416 [2024-11-20 11:31:17.246890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.416 [2024-11-20 11:31:17.246900] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:09.416 [2024-11-20 11:31:17.246914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.416 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.674 11:31:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.674 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.674 "name": "Existed_Raid", 00:19:09.674 "uuid": "e960b3b4-69df-4b43-afb5-fb620bc8a010", 00:19:09.674 "strip_size_kb": 64, 00:19:09.674 "state": "configuring", 00:19:09.674 "raid_level": "raid5f", 00:19:09.674 "superblock": true, 00:19:09.674 "num_base_bdevs": 3, 00:19:09.674 "num_base_bdevs_discovered": 0, 00:19:09.674 "num_base_bdevs_operational": 3, 00:19:09.674 "base_bdevs_list": [ 00:19:09.674 { 00:19:09.674 "name": "BaseBdev1", 00:19:09.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.674 "is_configured": false, 00:19:09.674 "data_offset": 0, 00:19:09.674 "data_size": 0 00:19:09.674 }, 00:19:09.674 { 00:19:09.674 "name": "BaseBdev2", 00:19:09.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.674 "is_configured": false, 00:19:09.674 "data_offset": 0, 00:19:09.674 "data_size": 0 00:19:09.674 }, 00:19:09.674 { 00:19:09.674 "name": "BaseBdev3", 00:19:09.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.674 "is_configured": false, 00:19:09.674 "data_offset": 0, 00:19:09.674 "data_size": 0 00:19:09.674 } 00:19:09.674 ] 00:19:09.674 }' 00:19:09.674 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.674 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.934 [2024-11-20 11:31:17.742820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.934 
[2024-11-20 11:31:17.742999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.934 [2024-11-20 11:31:17.750810] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.934 [2024-11-20 11:31:17.750865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.934 [2024-11-20 11:31:17.750881] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.934 [2024-11-20 11:31:17.750897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.934 [2024-11-20 11:31:17.750906] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.934 [2024-11-20 11:31:17.750920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.934 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.193 [2024-11-20 11:31:17.796535] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.193 BaseBdev1 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.193 [ 00:19:10.193 { 00:19:10.193 "name": "BaseBdev1", 00:19:10.193 "aliases": [ 00:19:10.193 "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d" 00:19:10.193 ], 00:19:10.193 "product_name": "Malloc disk", 00:19:10.193 "block_size": 512, 00:19:10.193 
"num_blocks": 65536, 00:19:10.193 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:10.193 "assigned_rate_limits": { 00:19:10.193 "rw_ios_per_sec": 0, 00:19:10.193 "rw_mbytes_per_sec": 0, 00:19:10.193 "r_mbytes_per_sec": 0, 00:19:10.193 "w_mbytes_per_sec": 0 00:19:10.193 }, 00:19:10.193 "claimed": true, 00:19:10.193 "claim_type": "exclusive_write", 00:19:10.193 "zoned": false, 00:19:10.193 "supported_io_types": { 00:19:10.193 "read": true, 00:19:10.193 "write": true, 00:19:10.193 "unmap": true, 00:19:10.193 "flush": true, 00:19:10.193 "reset": true, 00:19:10.193 "nvme_admin": false, 00:19:10.193 "nvme_io": false, 00:19:10.193 "nvme_io_md": false, 00:19:10.193 "write_zeroes": true, 00:19:10.193 "zcopy": true, 00:19:10.193 "get_zone_info": false, 00:19:10.193 "zone_management": false, 00:19:10.193 "zone_append": false, 00:19:10.193 "compare": false, 00:19:10.193 "compare_and_write": false, 00:19:10.193 "abort": true, 00:19:10.193 "seek_hole": false, 00:19:10.193 "seek_data": false, 00:19:10.193 "copy": true, 00:19:10.193 "nvme_iov_md": false 00:19:10.193 }, 00:19:10.193 "memory_domains": [ 00:19:10.193 { 00:19:10.193 "dma_device_id": "system", 00:19:10.193 "dma_device_type": 1 00:19:10.193 }, 00:19:10.193 { 00:19:10.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.193 "dma_device_type": 2 00:19:10.193 } 00:19:10.193 ], 00:19:10.193 "driver_specific": {} 00:19:10.193 } 00:19:10.193 ] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.193 "name": "Existed_Raid", 00:19:10.193 "uuid": "a7346e34-3d87-4462-a082-7d7f7bfff288", 00:19:10.193 "strip_size_kb": 64, 00:19:10.193 "state": "configuring", 00:19:10.193 "raid_level": "raid5f", 00:19:10.193 "superblock": true, 00:19:10.193 "num_base_bdevs": 3, 00:19:10.193 "num_base_bdevs_discovered": 1, 00:19:10.193 "num_base_bdevs_operational": 3, 00:19:10.193 "base_bdevs_list": [ 00:19:10.193 { 00:19:10.193 
"name": "BaseBdev1", 00:19:10.193 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:10.193 "is_configured": true, 00:19:10.193 "data_offset": 2048, 00:19:10.193 "data_size": 63488 00:19:10.193 }, 00:19:10.193 { 00:19:10.193 "name": "BaseBdev2", 00:19:10.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.193 "is_configured": false, 00:19:10.193 "data_offset": 0, 00:19:10.193 "data_size": 0 00:19:10.193 }, 00:19:10.193 { 00:19:10.193 "name": "BaseBdev3", 00:19:10.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.193 "is_configured": false, 00:19:10.193 "data_offset": 0, 00:19:10.193 "data_size": 0 00:19:10.193 } 00:19:10.193 ] 00:19:10.193 }' 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.193 11:31:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.761 [2024-11-20 11:31:18.320748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.761 [2024-11-20 11:31:18.320827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:10.761 [2024-11-20 11:31:18.328794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.761 [2024-11-20 11:31:18.331300] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.761 [2024-11-20 11:31:18.331354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.761 [2024-11-20 11:31:18.331370] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.761 [2024-11-20 11:31:18.331385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.761 "name": "Existed_Raid", 00:19:10.761 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:10.761 "strip_size_kb": 64, 00:19:10.761 "state": "configuring", 00:19:10.761 "raid_level": "raid5f", 00:19:10.761 "superblock": true, 00:19:10.761 "num_base_bdevs": 3, 00:19:10.761 "num_base_bdevs_discovered": 1, 00:19:10.761 "num_base_bdevs_operational": 3, 00:19:10.761 "base_bdevs_list": [ 00:19:10.761 { 00:19:10.761 "name": "BaseBdev1", 00:19:10.761 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:10.761 "is_configured": true, 00:19:10.761 "data_offset": 2048, 00:19:10.761 "data_size": 63488 00:19:10.761 }, 00:19:10.761 { 00:19:10.761 "name": "BaseBdev2", 00:19:10.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.761 "is_configured": false, 00:19:10.761 "data_offset": 0, 00:19:10.761 "data_size": 0 00:19:10.761 }, 00:19:10.761 { 00:19:10.761 "name": "BaseBdev3", 00:19:10.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.761 "is_configured": false, 00:19:10.761 "data_offset": 0, 00:19:10.761 "data_size": 
0 00:19:10.761 } 00:19:10.761 ] 00:19:10.761 }' 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.761 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.021 [2024-11-20 11:31:18.835826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.021 BaseBdev2 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.021 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.022 11:31:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.022 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:11.022 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.022 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.022 [ 00:19:11.022 { 00:19:11.022 "name": "BaseBdev2", 00:19:11.022 "aliases": [ 00:19:11.022 "cdf680af-9e55-43ac-96eb-356c3c4a94d2" 00:19:11.022 ], 00:19:11.022 "product_name": "Malloc disk", 00:19:11.022 "block_size": 512, 00:19:11.022 "num_blocks": 65536, 00:19:11.022 "uuid": "cdf680af-9e55-43ac-96eb-356c3c4a94d2", 00:19:11.022 "assigned_rate_limits": { 00:19:11.022 "rw_ios_per_sec": 0, 00:19:11.022 "rw_mbytes_per_sec": 0, 00:19:11.022 "r_mbytes_per_sec": 0, 00:19:11.022 "w_mbytes_per_sec": 0 00:19:11.022 }, 00:19:11.022 "claimed": true, 00:19:11.022 "claim_type": "exclusive_write", 00:19:11.022 "zoned": false, 00:19:11.022 "supported_io_types": { 00:19:11.022 "read": true, 00:19:11.022 "write": true, 00:19:11.022 "unmap": true, 00:19:11.022 "flush": true, 00:19:11.022 "reset": true, 00:19:11.022 "nvme_admin": false, 00:19:11.022 "nvme_io": false, 00:19:11.022 "nvme_io_md": false, 00:19:11.022 "write_zeroes": true, 00:19:11.022 "zcopy": true, 00:19:11.022 "get_zone_info": false, 00:19:11.022 "zone_management": false, 00:19:11.022 "zone_append": false, 00:19:11.022 "compare": false, 00:19:11.022 "compare_and_write": false, 00:19:11.022 "abort": true, 00:19:11.022 "seek_hole": false, 00:19:11.022 "seek_data": false, 00:19:11.022 "copy": true, 00:19:11.022 "nvme_iov_md": false 00:19:11.022 }, 00:19:11.022 "memory_domains": [ 00:19:11.022 { 00:19:11.022 "dma_device_id": "system", 00:19:11.022 "dma_device_type": 1 00:19:11.281 }, 00:19:11.281 { 00:19:11.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.281 "dma_device_type": 2 00:19:11.281 } 
00:19:11.281 ], 00:19:11.281 "driver_specific": {} 00:19:11.281 } 00:19:11.281 ] 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.281 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.281 "name": "Existed_Raid", 00:19:11.281 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:11.281 "strip_size_kb": 64, 00:19:11.281 "state": "configuring", 00:19:11.281 "raid_level": "raid5f", 00:19:11.281 "superblock": true, 00:19:11.281 "num_base_bdevs": 3, 00:19:11.281 "num_base_bdevs_discovered": 2, 00:19:11.281 "num_base_bdevs_operational": 3, 00:19:11.281 "base_bdevs_list": [ 00:19:11.282 { 00:19:11.282 "name": "BaseBdev1", 00:19:11.282 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:11.282 "is_configured": true, 00:19:11.282 "data_offset": 2048, 00:19:11.282 "data_size": 63488 00:19:11.282 }, 00:19:11.282 { 00:19:11.282 "name": "BaseBdev2", 00:19:11.282 "uuid": "cdf680af-9e55-43ac-96eb-356c3c4a94d2", 00:19:11.282 "is_configured": true, 00:19:11.282 "data_offset": 2048, 00:19:11.282 "data_size": 63488 00:19:11.282 }, 00:19:11.282 { 00:19:11.282 "name": "BaseBdev3", 00:19:11.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.282 "is_configured": false, 00:19:11.282 "data_offset": 0, 00:19:11.282 "data_size": 0 00:19:11.282 } 00:19:11.282 ] 00:19:11.282 }' 00:19:11.282 11:31:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.282 11:31:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.539 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:11.539 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:11.539 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.798 BaseBdev3 00:19:11.798 [2024-11-20 11:31:19.418863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.798 [2024-11-20 11:31:19.419205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:11.798 [2024-11-20 11:31:19.419238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:11.798 [2024-11-20 11:31:19.419564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:11.798 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.798 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.799 [2024-11-20 11:31:19.424864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:11.799 [2024-11-20 11:31:19.425024] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:11.799 [2024-11-20 11:31:19.425527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.799 [ 00:19:11.799 { 00:19:11.799 "name": "BaseBdev3", 00:19:11.799 "aliases": [ 00:19:11.799 "9cb61b86-e209-448d-ab14-0e19b3eb2197" 00:19:11.799 ], 00:19:11.799 "product_name": "Malloc disk", 00:19:11.799 "block_size": 512, 00:19:11.799 "num_blocks": 65536, 00:19:11.799 "uuid": "9cb61b86-e209-448d-ab14-0e19b3eb2197", 00:19:11.799 "assigned_rate_limits": { 00:19:11.799 "rw_ios_per_sec": 0, 00:19:11.799 "rw_mbytes_per_sec": 0, 00:19:11.799 "r_mbytes_per_sec": 0, 00:19:11.799 "w_mbytes_per_sec": 0 00:19:11.799 }, 00:19:11.799 "claimed": true, 00:19:11.799 "claim_type": "exclusive_write", 00:19:11.799 "zoned": false, 00:19:11.799 "supported_io_types": { 00:19:11.799 "read": true, 00:19:11.799 "write": true, 00:19:11.799 "unmap": true, 00:19:11.799 "flush": true, 00:19:11.799 "reset": true, 00:19:11.799 "nvme_admin": false, 00:19:11.799 "nvme_io": false, 00:19:11.799 "nvme_io_md": false, 00:19:11.799 "write_zeroes": true, 00:19:11.799 "zcopy": true, 00:19:11.799 "get_zone_info": false, 00:19:11.799 "zone_management": false, 00:19:11.799 "zone_append": false, 00:19:11.799 "compare": false, 00:19:11.799 "compare_and_write": false, 00:19:11.799 "abort": true, 00:19:11.799 "seek_hole": false, 00:19:11.799 "seek_data": false, 00:19:11.799 "copy": true, 00:19:11.799 
"nvme_iov_md": false 00:19:11.799 }, 00:19:11.799 "memory_domains": [ 00:19:11.799 { 00:19:11.799 "dma_device_id": "system", 00:19:11.799 "dma_device_type": 1 00:19:11.799 }, 00:19:11.799 { 00:19:11.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.799 "dma_device_type": 2 00:19:11.799 } 00:19:11.799 ], 00:19:11.799 "driver_specific": {} 00:19:11.799 } 00:19:11.799 ] 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.799 "name": "Existed_Raid", 00:19:11.799 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:11.799 "strip_size_kb": 64, 00:19:11.799 "state": "online", 00:19:11.799 "raid_level": "raid5f", 00:19:11.799 "superblock": true, 00:19:11.799 "num_base_bdevs": 3, 00:19:11.799 "num_base_bdevs_discovered": 3, 00:19:11.799 "num_base_bdevs_operational": 3, 00:19:11.799 "base_bdevs_list": [ 00:19:11.799 { 00:19:11.799 "name": "BaseBdev1", 00:19:11.799 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:11.799 "is_configured": true, 00:19:11.799 "data_offset": 2048, 00:19:11.799 "data_size": 63488 00:19:11.799 }, 00:19:11.799 { 00:19:11.799 "name": "BaseBdev2", 00:19:11.799 "uuid": "cdf680af-9e55-43ac-96eb-356c3c4a94d2", 00:19:11.799 "is_configured": true, 00:19:11.799 "data_offset": 2048, 00:19:11.799 "data_size": 63488 00:19:11.799 }, 00:19:11.799 { 00:19:11.799 "name": "BaseBdev3", 00:19:11.799 "uuid": "9cb61b86-e209-448d-ab14-0e19b3eb2197", 00:19:11.799 "is_configured": true, 00:19:11.799 "data_offset": 2048, 00:19:11.799 "data_size": 63488 00:19:11.799 } 00:19:11.799 ] 00:19:11.799 }' 00:19:11.799 11:31:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.799 11:31:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.367 [2024-11-20 11:31:20.015687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.367 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:12.367 "name": "Existed_Raid", 00:19:12.367 "aliases": [ 00:19:12.367 "43a0ef7d-02cf-4867-98d5-bf55ba3b2147" 00:19:12.367 ], 00:19:12.367 "product_name": "Raid Volume", 00:19:12.367 "block_size": 512, 00:19:12.367 "num_blocks": 126976, 00:19:12.367 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:12.367 "assigned_rate_limits": { 00:19:12.367 "rw_ios_per_sec": 0, 00:19:12.367 
"rw_mbytes_per_sec": 0, 00:19:12.367 "r_mbytes_per_sec": 0, 00:19:12.367 "w_mbytes_per_sec": 0 00:19:12.367 }, 00:19:12.367 "claimed": false, 00:19:12.367 "zoned": false, 00:19:12.367 "supported_io_types": { 00:19:12.367 "read": true, 00:19:12.367 "write": true, 00:19:12.367 "unmap": false, 00:19:12.367 "flush": false, 00:19:12.367 "reset": true, 00:19:12.367 "nvme_admin": false, 00:19:12.367 "nvme_io": false, 00:19:12.367 "nvme_io_md": false, 00:19:12.367 "write_zeroes": true, 00:19:12.367 "zcopy": false, 00:19:12.367 "get_zone_info": false, 00:19:12.367 "zone_management": false, 00:19:12.367 "zone_append": false, 00:19:12.367 "compare": false, 00:19:12.367 "compare_and_write": false, 00:19:12.367 "abort": false, 00:19:12.367 "seek_hole": false, 00:19:12.368 "seek_data": false, 00:19:12.368 "copy": false, 00:19:12.368 "nvme_iov_md": false 00:19:12.368 }, 00:19:12.368 "driver_specific": { 00:19:12.368 "raid": { 00:19:12.368 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:12.368 "strip_size_kb": 64, 00:19:12.368 "state": "online", 00:19:12.368 "raid_level": "raid5f", 00:19:12.368 "superblock": true, 00:19:12.368 "num_base_bdevs": 3, 00:19:12.368 "num_base_bdevs_discovered": 3, 00:19:12.368 "num_base_bdevs_operational": 3, 00:19:12.368 "base_bdevs_list": [ 00:19:12.368 { 00:19:12.368 "name": "BaseBdev1", 00:19:12.368 "uuid": "96e77c2b-cd88-4b8c-ab17-6cc5a54a5c2d", 00:19:12.368 "is_configured": true, 00:19:12.368 "data_offset": 2048, 00:19:12.368 "data_size": 63488 00:19:12.368 }, 00:19:12.368 { 00:19:12.368 "name": "BaseBdev2", 00:19:12.368 "uuid": "cdf680af-9e55-43ac-96eb-356c3c4a94d2", 00:19:12.368 "is_configured": true, 00:19:12.368 "data_offset": 2048, 00:19:12.368 "data_size": 63488 00:19:12.368 }, 00:19:12.368 { 00:19:12.368 "name": "BaseBdev3", 00:19:12.368 "uuid": "9cb61b86-e209-448d-ab14-0e19b3eb2197", 00:19:12.368 "is_configured": true, 00:19:12.368 "data_offset": 2048, 00:19:12.368 "data_size": 63488 00:19:12.368 } 00:19:12.368 ] 00:19:12.368 } 
00:19:12.368 } 00:19:12.368 }' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:12.368 BaseBdev2 00:19:12.368 BaseBdev3' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.368 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.627 [2024-11-20 11:31:20.355667] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.627 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.628 11:31:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.628 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.628 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.628 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.887 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.887 "name": "Existed_Raid", 00:19:12.887 "uuid": "43a0ef7d-02cf-4867-98d5-bf55ba3b2147", 00:19:12.887 "strip_size_kb": 64, 00:19:12.887 "state": "online", 00:19:12.887 "raid_level": "raid5f", 00:19:12.887 "superblock": true, 00:19:12.887 "num_base_bdevs": 3, 00:19:12.887 "num_base_bdevs_discovered": 2, 00:19:12.887 "num_base_bdevs_operational": 2, 00:19:12.887 "base_bdevs_list": [ 00:19:12.887 { 00:19:12.887 "name": null, 00:19:12.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.887 "is_configured": false, 00:19:12.887 "data_offset": 0, 00:19:12.887 "data_size": 63488 00:19:12.887 }, 00:19:12.887 { 00:19:12.887 "name": "BaseBdev2", 00:19:12.887 "uuid": "cdf680af-9e55-43ac-96eb-356c3c4a94d2", 00:19:12.887 "is_configured": true, 00:19:12.887 "data_offset": 2048, 00:19:12.887 "data_size": 63488 00:19:12.887 }, 00:19:12.887 { 00:19:12.887 "name": "BaseBdev3", 00:19:12.887 "uuid": "9cb61b86-e209-448d-ab14-0e19b3eb2197", 00:19:12.887 "is_configured": true, 00:19:12.887 "data_offset": 2048, 00:19:12.887 "data_size": 63488 00:19:12.887 } 00:19:12.887 ] 00:19:12.887 }' 00:19:12.887 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.887 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.145 11:31:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.145 11:31:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.405 [2024-11-20 11:31:21.019434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.405 [2024-11-20 11:31:21.019770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.405 [2024-11-20 11:31:21.106215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.405 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.405 [2024-11-20 11:31:21.166309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:13.405 [2024-11-20 11:31:21.166508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 BaseBdev2 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 [ 00:19:13.665 { 00:19:13.665 "name": "BaseBdev2", 00:19:13.665 "aliases": [ 00:19:13.665 "047dffe4-4e7c-4d35-9849-ea563b1e55c8" 00:19:13.665 ], 00:19:13.665 "product_name": "Malloc disk", 00:19:13.665 "block_size": 512, 00:19:13.665 "num_blocks": 65536, 00:19:13.665 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:13.665 "assigned_rate_limits": { 00:19:13.665 "rw_ios_per_sec": 0, 00:19:13.665 "rw_mbytes_per_sec": 0, 00:19:13.665 "r_mbytes_per_sec": 0, 00:19:13.665 "w_mbytes_per_sec": 0 00:19:13.665 }, 00:19:13.665 "claimed": false, 00:19:13.665 "zoned": false, 00:19:13.665 "supported_io_types": { 00:19:13.665 "read": true, 00:19:13.665 "write": true, 00:19:13.665 "unmap": true, 00:19:13.665 "flush": true, 00:19:13.665 "reset": true, 00:19:13.665 "nvme_admin": false, 00:19:13.665 "nvme_io": false, 00:19:13.665 "nvme_io_md": false, 00:19:13.665 "write_zeroes": true, 00:19:13.665 "zcopy": true, 00:19:13.665 "get_zone_info": false, 00:19:13.665 "zone_management": false, 00:19:13.665 "zone_append": false, 
00:19:13.665 "compare": false, 00:19:13.665 "compare_and_write": false, 00:19:13.665 "abort": true, 00:19:13.665 "seek_hole": false, 00:19:13.665 "seek_data": false, 00:19:13.665 "copy": true, 00:19:13.665 "nvme_iov_md": false 00:19:13.665 }, 00:19:13.665 "memory_domains": [ 00:19:13.665 { 00:19:13.665 "dma_device_id": "system", 00:19:13.665 "dma_device_type": 1 00:19:13.665 }, 00:19:13.665 { 00:19:13.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.665 "dma_device_type": 2 00:19:13.665 } 00:19:13.665 ], 00:19:13.665 "driver_specific": {} 00:19:13.665 } 00:19:13.665 ] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 BaseBdev3 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:13.665 
11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.665 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.665 [ 00:19:13.665 { 00:19:13.665 "name": "BaseBdev3", 00:19:13.665 "aliases": [ 00:19:13.665 "6b80047c-abd0-4707-a868-8f05b5a46131" 00:19:13.665 ], 00:19:13.665 "product_name": "Malloc disk", 00:19:13.665 "block_size": 512, 00:19:13.665 "num_blocks": 65536, 00:19:13.665 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:13.665 "assigned_rate_limits": { 00:19:13.665 "rw_ios_per_sec": 0, 00:19:13.665 "rw_mbytes_per_sec": 0, 00:19:13.665 "r_mbytes_per_sec": 0, 00:19:13.665 "w_mbytes_per_sec": 0 00:19:13.665 }, 00:19:13.665 "claimed": false, 00:19:13.665 "zoned": false, 00:19:13.665 "supported_io_types": { 00:19:13.665 "read": true, 00:19:13.665 "write": true, 00:19:13.665 "unmap": true, 00:19:13.665 "flush": true, 00:19:13.665 "reset": true, 00:19:13.665 "nvme_admin": false, 00:19:13.665 "nvme_io": false, 00:19:13.665 "nvme_io_md": false, 00:19:13.666 "write_zeroes": true, 00:19:13.666 "zcopy": true, 00:19:13.666 "get_zone_info": 
false, 00:19:13.666 "zone_management": false, 00:19:13.666 "zone_append": false, 00:19:13.666 "compare": false, 00:19:13.666 "compare_and_write": false, 00:19:13.666 "abort": true, 00:19:13.666 "seek_hole": false, 00:19:13.666 "seek_data": false, 00:19:13.666 "copy": true, 00:19:13.666 "nvme_iov_md": false 00:19:13.666 }, 00:19:13.666 "memory_domains": [ 00:19:13.666 { 00:19:13.666 "dma_device_id": "system", 00:19:13.666 "dma_device_type": 1 00:19:13.666 }, 00:19:13.666 { 00:19:13.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.666 "dma_device_type": 2 00:19:13.666 } 00:19:13.666 ], 00:19:13.666 "driver_specific": {} 00:19:13.666 } 00:19:13.666 ] 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.666 [2024-11-20 11:31:21.454675] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:13.666 [2024-11-20 11:31:21.454856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:13.666 [2024-11-20 11:31:21.454905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.666 [2024-11-20 11:31:21.457329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.666 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.924 11:31:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.924 "name": "Existed_Raid", 00:19:13.924 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:13.924 "strip_size_kb": 64, 00:19:13.924 "state": "configuring", 00:19:13.924 "raid_level": "raid5f", 00:19:13.924 "superblock": true, 00:19:13.924 "num_base_bdevs": 3, 00:19:13.924 "num_base_bdevs_discovered": 2, 00:19:13.924 "num_base_bdevs_operational": 3, 00:19:13.924 "base_bdevs_list": [ 00:19:13.924 { 00:19:13.924 "name": "BaseBdev1", 00:19:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.924 "is_configured": false, 00:19:13.924 "data_offset": 0, 00:19:13.925 "data_size": 0 00:19:13.925 }, 00:19:13.925 { 00:19:13.925 "name": "BaseBdev2", 00:19:13.925 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:13.925 "is_configured": true, 00:19:13.925 "data_offset": 2048, 00:19:13.925 "data_size": 63488 00:19:13.925 }, 00:19:13.925 { 00:19:13.925 "name": "BaseBdev3", 00:19:13.925 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:13.925 "is_configured": true, 00:19:13.925 "data_offset": 2048, 00:19:13.925 "data_size": 63488 00:19:13.925 } 00:19:13.925 ] 00:19:13.925 }' 00:19:13.925 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.925 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.183 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.184 [2024-11-20 11:31:21.982776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.184 
11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.184 11:31:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.184 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.443 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.443 "name": "Existed_Raid", 00:19:14.443 "uuid": 
"9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:14.443 "strip_size_kb": 64, 00:19:14.443 "state": "configuring", 00:19:14.443 "raid_level": "raid5f", 00:19:14.443 "superblock": true, 00:19:14.443 "num_base_bdevs": 3, 00:19:14.443 "num_base_bdevs_discovered": 1, 00:19:14.443 "num_base_bdevs_operational": 3, 00:19:14.443 "base_bdevs_list": [ 00:19:14.443 { 00:19:14.443 "name": "BaseBdev1", 00:19:14.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.443 "is_configured": false, 00:19:14.443 "data_offset": 0, 00:19:14.443 "data_size": 0 00:19:14.443 }, 00:19:14.443 { 00:19:14.443 "name": null, 00:19:14.443 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:14.443 "is_configured": false, 00:19:14.443 "data_offset": 0, 00:19:14.443 "data_size": 63488 00:19:14.443 }, 00:19:14.443 { 00:19:14.443 "name": "BaseBdev3", 00:19:14.443 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:14.443 "is_configured": true, 00:19:14.443 "data_offset": 2048, 00:19:14.443 "data_size": 63488 00:19:14.443 } 00:19:14.443 ] 00:19:14.443 }' 00:19:14.443 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.443 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.702 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.702 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.702 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.702 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:14.960 11:31:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.960 [2024-11-20 11:31:22.624842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.960 BaseBdev1 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:14.960 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.960 [ 00:19:14.960 { 00:19:14.960 "name": "BaseBdev1", 00:19:14.960 "aliases": [ 00:19:14.960 "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e" 00:19:14.960 ], 00:19:14.960 "product_name": "Malloc disk", 00:19:14.960 "block_size": 512, 00:19:14.960 "num_blocks": 65536, 00:19:14.960 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:14.960 "assigned_rate_limits": { 00:19:14.960 "rw_ios_per_sec": 0, 00:19:14.960 "rw_mbytes_per_sec": 0, 00:19:14.960 "r_mbytes_per_sec": 0, 00:19:14.960 "w_mbytes_per_sec": 0 00:19:14.960 }, 00:19:14.960 "claimed": true, 00:19:14.960 "claim_type": "exclusive_write", 00:19:14.960 "zoned": false, 00:19:14.960 "supported_io_types": { 00:19:14.960 "read": true, 00:19:14.960 "write": true, 00:19:14.960 "unmap": true, 00:19:14.960 "flush": true, 00:19:14.960 "reset": true, 00:19:14.960 "nvme_admin": false, 00:19:14.960 "nvme_io": false, 00:19:14.960 "nvme_io_md": false, 00:19:14.961 "write_zeroes": true, 00:19:14.961 "zcopy": true, 00:19:14.961 "get_zone_info": false, 00:19:14.961 "zone_management": false, 00:19:14.961 "zone_append": false, 00:19:14.961 "compare": false, 00:19:14.961 "compare_and_write": false, 00:19:14.961 "abort": true, 00:19:14.961 "seek_hole": false, 00:19:14.961 "seek_data": false, 00:19:14.961 "copy": true, 00:19:14.961 "nvme_iov_md": false 00:19:14.961 }, 00:19:14.961 "memory_domains": [ 00:19:14.961 { 00:19:14.961 "dma_device_id": "system", 00:19:14.961 "dma_device_type": 1 00:19:14.961 }, 00:19:14.961 { 00:19:14.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.961 "dma_device_type": 2 00:19:14.961 } 00:19:14.961 ], 00:19:14.961 "driver_specific": {} 00:19:14.961 } 00:19:14.961 ] 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.961 "name": "Existed_Raid", 00:19:14.961 "uuid": 
"9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:14.961 "strip_size_kb": 64, 00:19:14.961 "state": "configuring", 00:19:14.961 "raid_level": "raid5f", 00:19:14.961 "superblock": true, 00:19:14.961 "num_base_bdevs": 3, 00:19:14.961 "num_base_bdevs_discovered": 2, 00:19:14.961 "num_base_bdevs_operational": 3, 00:19:14.961 "base_bdevs_list": [ 00:19:14.961 { 00:19:14.961 "name": "BaseBdev1", 00:19:14.961 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:14.961 "is_configured": true, 00:19:14.961 "data_offset": 2048, 00:19:14.961 "data_size": 63488 00:19:14.961 }, 00:19:14.961 { 00:19:14.961 "name": null, 00:19:14.961 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:14.961 "is_configured": false, 00:19:14.961 "data_offset": 0, 00:19:14.961 "data_size": 63488 00:19:14.961 }, 00:19:14.961 { 00:19:14.961 "name": "BaseBdev3", 00:19:14.961 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:14.961 "is_configured": true, 00:19:14.961 "data_offset": 2048, 00:19:14.961 "data_size": 63488 00:19:14.961 } 00:19:14.961 ] 00:19:14.961 }' 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.961 11:31:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:15.529 11:31:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.529 [2024-11-20 11:31:23.229074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.529 "name": "Existed_Raid", 00:19:15.529 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:15.529 "strip_size_kb": 64, 00:19:15.529 "state": "configuring", 00:19:15.529 "raid_level": "raid5f", 00:19:15.529 "superblock": true, 00:19:15.529 "num_base_bdevs": 3, 00:19:15.529 "num_base_bdevs_discovered": 1, 00:19:15.529 "num_base_bdevs_operational": 3, 00:19:15.529 "base_bdevs_list": [ 00:19:15.529 { 00:19:15.529 "name": "BaseBdev1", 00:19:15.529 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:15.529 "is_configured": true, 00:19:15.529 "data_offset": 2048, 00:19:15.529 "data_size": 63488 00:19:15.529 }, 00:19:15.529 { 00:19:15.529 "name": null, 00:19:15.529 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:15.529 "is_configured": false, 00:19:15.529 "data_offset": 0, 00:19:15.529 "data_size": 63488 00:19:15.529 }, 00:19:15.529 { 00:19:15.529 "name": null, 00:19:15.529 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:15.529 "is_configured": false, 00:19:15.529 "data_offset": 0, 00:19:15.529 "data_size": 63488 00:19:15.529 } 00:19:15.529 ] 00:19:15.529 }' 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.529 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 [2024-11-20 11:31:23.757249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.098 11:31:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.098 "name": "Existed_Raid", 00:19:16.098 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:16.098 "strip_size_kb": 64, 00:19:16.098 "state": "configuring", 00:19:16.098 "raid_level": "raid5f", 00:19:16.098 "superblock": true, 00:19:16.098 "num_base_bdevs": 3, 00:19:16.098 "num_base_bdevs_discovered": 2, 00:19:16.098 "num_base_bdevs_operational": 3, 00:19:16.098 "base_bdevs_list": [ 00:19:16.098 { 00:19:16.098 "name": "BaseBdev1", 00:19:16.098 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:16.098 "is_configured": true, 00:19:16.098 "data_offset": 2048, 00:19:16.098 "data_size": 63488 00:19:16.098 }, 00:19:16.098 { 00:19:16.098 "name": null, 00:19:16.098 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:16.098 "is_configured": false, 00:19:16.098 "data_offset": 0, 00:19:16.098 "data_size": 63488 00:19:16.098 }, 00:19:16.098 { 00:19:16.098 "name": "BaseBdev3", 00:19:16.098 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:16.098 
"is_configured": true, 00:19:16.098 "data_offset": 2048, 00:19:16.098 "data_size": 63488 00:19:16.098 } 00:19:16.098 ] 00:19:16.098 }' 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.098 11:31:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.666 [2024-11-20 11:31:24.333431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.666 "name": "Existed_Raid", 00:19:16.666 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:16.666 "strip_size_kb": 64, 00:19:16.666 "state": "configuring", 00:19:16.666 "raid_level": "raid5f", 00:19:16.666 "superblock": true, 00:19:16.666 "num_base_bdevs": 3, 00:19:16.666 "num_base_bdevs_discovered": 1, 00:19:16.666 "num_base_bdevs_operational": 3, 00:19:16.666 "base_bdevs_list": [ 00:19:16.666 { 00:19:16.666 "name": null, 00:19:16.666 
"uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:16.666 "is_configured": false, 00:19:16.666 "data_offset": 0, 00:19:16.666 "data_size": 63488 00:19:16.666 }, 00:19:16.666 { 00:19:16.666 "name": null, 00:19:16.666 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:16.666 "is_configured": false, 00:19:16.666 "data_offset": 0, 00:19:16.666 "data_size": 63488 00:19:16.666 }, 00:19:16.666 { 00:19:16.666 "name": "BaseBdev3", 00:19:16.666 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:16.666 "is_configured": true, 00:19:16.666 "data_offset": 2048, 00:19:16.666 "data_size": 63488 00:19:16.666 } 00:19:16.666 ] 00:19:16.666 }' 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.666 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.230 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.230 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.230 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:17.230 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.230 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.231 [2024-11-20 11:31:24.976814] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.231 11:31:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:17.231 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.231 "name": "Existed_Raid", 00:19:17.231 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:17.231 "strip_size_kb": 64, 00:19:17.231 "state": "configuring", 00:19:17.231 "raid_level": "raid5f", 00:19:17.231 "superblock": true, 00:19:17.231 "num_base_bdevs": 3, 00:19:17.231 "num_base_bdevs_discovered": 2, 00:19:17.231 "num_base_bdevs_operational": 3, 00:19:17.231 "base_bdevs_list": [ 00:19:17.231 { 00:19:17.231 "name": null, 00:19:17.231 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:17.231 "is_configured": false, 00:19:17.231 "data_offset": 0, 00:19:17.231 "data_size": 63488 00:19:17.231 }, 00:19:17.231 { 00:19:17.231 "name": "BaseBdev2", 00:19:17.231 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:17.231 "is_configured": true, 00:19:17.231 "data_offset": 2048, 00:19:17.231 "data_size": 63488 00:19:17.231 }, 00:19:17.231 { 00:19:17.231 "name": "BaseBdev3", 00:19:17.231 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:17.231 "is_configured": true, 00:19:17.231 "data_offset": 2048, 00:19:17.231 "data_size": 63488 00:19:17.231 } 00:19:17.231 ] 00:19:17.231 }' 00:19:17.231 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.231 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe3e37b0-64ed-4ed3-8fe0-76cd36af425e 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 NewBaseBdev 00:19:17.798 [2024-11-20 11:31:25.630881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:17.798 [2024-11-20 11:31:25.631167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:17.798 [2024-11-20 11:31:25.631192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:17.798 [2024-11-20 11:31:25.631496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.798 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 [2024-11-20 11:31:25.636646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:17.799 [2024-11-20 11:31:25.636791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:17.799 [2024-11-20 11:31:25.637317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.059 [ 00:19:18.059 { 00:19:18.059 "name": "NewBaseBdev", 00:19:18.059 "aliases": [ 00:19:18.059 "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e" 00:19:18.059 ], 00:19:18.059 "product_name": "Malloc disk", 00:19:18.059 "block_size": 512, 
00:19:18.059 "num_blocks": 65536, 00:19:18.059 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:18.059 "assigned_rate_limits": { 00:19:18.059 "rw_ios_per_sec": 0, 00:19:18.059 "rw_mbytes_per_sec": 0, 00:19:18.059 "r_mbytes_per_sec": 0, 00:19:18.059 "w_mbytes_per_sec": 0 00:19:18.059 }, 00:19:18.059 "claimed": true, 00:19:18.059 "claim_type": "exclusive_write", 00:19:18.059 "zoned": false, 00:19:18.059 "supported_io_types": { 00:19:18.059 "read": true, 00:19:18.059 "write": true, 00:19:18.059 "unmap": true, 00:19:18.059 "flush": true, 00:19:18.059 "reset": true, 00:19:18.059 "nvme_admin": false, 00:19:18.059 "nvme_io": false, 00:19:18.059 "nvme_io_md": false, 00:19:18.059 "write_zeroes": true, 00:19:18.059 "zcopy": true, 00:19:18.059 "get_zone_info": false, 00:19:18.059 "zone_management": false, 00:19:18.059 "zone_append": false, 00:19:18.059 "compare": false, 00:19:18.059 "compare_and_write": false, 00:19:18.059 "abort": true, 00:19:18.059 "seek_hole": false, 00:19:18.059 "seek_data": false, 00:19:18.059 "copy": true, 00:19:18.059 "nvme_iov_md": false 00:19:18.059 }, 00:19:18.059 "memory_domains": [ 00:19:18.059 { 00:19:18.059 "dma_device_id": "system", 00:19:18.059 "dma_device_type": 1 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.059 "dma_device_type": 2 00:19:18.059 } 00:19:18.059 ], 00:19:18.059 "driver_specific": {} 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.059 "name": "Existed_Raid", 00:19:18.059 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:18.059 "strip_size_kb": 64, 00:19:18.059 "state": "online", 00:19:18.059 "raid_level": "raid5f", 00:19:18.059 "superblock": true, 00:19:18.059 "num_base_bdevs": 3, 00:19:18.059 "num_base_bdevs_discovered": 3, 00:19:18.059 "num_base_bdevs_operational": 3, 00:19:18.059 "base_bdevs_list": [ 00:19:18.059 { 00:19:18.059 "name": 
"NewBaseBdev", 00:19:18.059 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:18.059 "is_configured": true, 00:19:18.059 "data_offset": 2048, 00:19:18.059 "data_size": 63488 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "name": "BaseBdev2", 00:19:18.059 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:18.059 "is_configured": true, 00:19:18.059 "data_offset": 2048, 00:19:18.059 "data_size": 63488 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "name": "BaseBdev3", 00:19:18.059 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:18.059 "is_configured": true, 00:19:18.059 "data_offset": 2048, 00:19:18.059 "data_size": 63488 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }' 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.059 11:31:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.626 11:31:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.626 [2024-11-20 11:31:26.175435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.626 "name": "Existed_Raid", 00:19:18.626 "aliases": [ 00:19:18.626 "9df7821c-eef4-4e83-b7ba-959fc2c16dee" 00:19:18.626 ], 00:19:18.626 "product_name": "Raid Volume", 00:19:18.626 "block_size": 512, 00:19:18.626 "num_blocks": 126976, 00:19:18.626 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:18.626 "assigned_rate_limits": { 00:19:18.626 "rw_ios_per_sec": 0, 00:19:18.626 "rw_mbytes_per_sec": 0, 00:19:18.626 "r_mbytes_per_sec": 0, 00:19:18.626 "w_mbytes_per_sec": 0 00:19:18.626 }, 00:19:18.626 "claimed": false, 00:19:18.626 "zoned": false, 00:19:18.626 "supported_io_types": { 00:19:18.626 "read": true, 00:19:18.626 "write": true, 00:19:18.626 "unmap": false, 00:19:18.626 "flush": false, 00:19:18.626 "reset": true, 00:19:18.626 "nvme_admin": false, 00:19:18.626 "nvme_io": false, 00:19:18.626 "nvme_io_md": false, 00:19:18.626 "write_zeroes": true, 00:19:18.626 "zcopy": false, 00:19:18.626 "get_zone_info": false, 00:19:18.626 "zone_management": false, 00:19:18.626 "zone_append": false, 00:19:18.626 "compare": false, 00:19:18.626 "compare_and_write": false, 00:19:18.626 "abort": false, 00:19:18.626 "seek_hole": false, 00:19:18.626 "seek_data": false, 00:19:18.626 "copy": false, 00:19:18.626 "nvme_iov_md": false 00:19:18.626 }, 00:19:18.626 "driver_specific": { 00:19:18.626 "raid": { 00:19:18.626 "uuid": "9df7821c-eef4-4e83-b7ba-959fc2c16dee", 00:19:18.626 "strip_size_kb": 64, 00:19:18.626 "state": "online", 00:19:18.626 "raid_level": "raid5f", 00:19:18.626 "superblock": true, 00:19:18.626 "num_base_bdevs": 3, 00:19:18.626 
"num_base_bdevs_discovered": 3, 00:19:18.626 "num_base_bdevs_operational": 3, 00:19:18.626 "base_bdevs_list": [ 00:19:18.626 { 00:19:18.626 "name": "NewBaseBdev", 00:19:18.626 "uuid": "fe3e37b0-64ed-4ed3-8fe0-76cd36af425e", 00:19:18.626 "is_configured": true, 00:19:18.626 "data_offset": 2048, 00:19:18.626 "data_size": 63488 00:19:18.626 }, 00:19:18.626 { 00:19:18.626 "name": "BaseBdev2", 00:19:18.626 "uuid": "047dffe4-4e7c-4d35-9849-ea563b1e55c8", 00:19:18.626 "is_configured": true, 00:19:18.626 "data_offset": 2048, 00:19:18.626 "data_size": 63488 00:19:18.626 }, 00:19:18.626 { 00:19:18.626 "name": "BaseBdev3", 00:19:18.626 "uuid": "6b80047c-abd0-4707-a868-8f05b5a46131", 00:19:18.626 "is_configured": true, 00:19:18.626 "data_offset": 2048, 00:19:18.626 "data_size": 63488 00:19:18.626 } 00:19:18.626 ] 00:19:18.626 } 00:19:18.626 } 00:19:18.626 }' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:18.626 BaseBdev2 00:19:18.626 BaseBdev3' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.626 11:31:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.885 [2024-11-20 11:31:26.511314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:18.885 [2024-11-20 11:31:26.511529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.885 [2024-11-20 11:31:26.511756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.885 [2024-11-20 11:31:26.512232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.885 [2024-11-20 11:31:26.512267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80767 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80767 ']' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80767 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80767 00:19:18.885 killing process with pid 80767 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80767' 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80767 00:19:18.885 [2024-11-20 11:31:26.545256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:18.885 11:31:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80767 00:19:19.143 [2024-11-20 11:31:26.822068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:20.080 11:31:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:20.080 00:19:20.080 real 0m11.717s 00:19:20.080 user 0m19.427s 00:19:20.080 sys 0m1.589s 00:19:20.080 11:31:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.080 11:31:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.080 ************************************ 00:19:20.080 END TEST raid5f_state_function_test_sb 00:19:20.080 ************************************ 00:19:20.080 11:31:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:19:20.080 11:31:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:20.080 11:31:27 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.080 11:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.338 ************************************ 00:19:20.338 START TEST raid5f_superblock_test 00:19:20.338 ************************************ 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:20.338 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81393 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81393 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81393 ']' 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.339 11:31:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.339 [2024-11-20 11:31:28.042516] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:19:20.339 [2024-11-20 11:31:28.042741] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81393 ] 00:19:20.597 [2024-11-20 11:31:28.227263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.597 [2024-11-20 11:31:28.374349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.855 [2024-11-20 11:31:28.576243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.855 [2024-11-20 11:31:28.576298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 malloc1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 [2024-11-20 11:31:29.098193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:21.466 [2024-11-20 11:31:29.098449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.466 [2024-11-20 11:31:29.098646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:21.466 [2024-11-20 11:31:29.098780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.466 [2024-11-20 11:31:29.101738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.466 [2024-11-20 11:31:29.101903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:21.466 pt1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 malloc2 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 [2024-11-20 11:31:29.154525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:21.466 [2024-11-20 11:31:29.154840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.466 [2024-11-20 11:31:29.154921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:21.466 [2024-11-20 11:31:29.155107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.466 [2024-11-20 11:31:29.158032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.466 [2024-11-20 11:31:29.158078] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:21.466 pt2 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 malloc3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 [2024-11-20 11:31:29.223643] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:21.466 [2024-11-20 11:31:29.223724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.466 [2024-11-20 11:31:29.223760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:21.466 [2024-11-20 11:31:29.223776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.466 [2024-11-20 11:31:29.226669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.466 pt3 00:19:21.466 [2024-11-20 11:31:29.226873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 [2024-11-20 11:31:29.231849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.466 [2024-11-20 11:31:29.234330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.466 [2024-11-20 11:31:29.234562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:21.466 [2024-11-20 11:31:29.234842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:21.466 [2024-11-20 11:31:29.234874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:19:21.466 [2024-11-20 11:31:29.235210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:21.466 [2024-11-20 11:31:29.240427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:21.466 [2024-11-20 11:31:29.240456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:21.466 [2024-11-20 11:31:29.240823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.466 
11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.466 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.466 "name": "raid_bdev1", 00:19:21.466 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:21.466 "strip_size_kb": 64, 00:19:21.466 "state": "online", 00:19:21.466 "raid_level": "raid5f", 00:19:21.466 "superblock": true, 00:19:21.466 "num_base_bdevs": 3, 00:19:21.466 "num_base_bdevs_discovered": 3, 00:19:21.466 "num_base_bdevs_operational": 3, 00:19:21.466 "base_bdevs_list": [ 00:19:21.466 { 00:19:21.466 "name": "pt1", 00:19:21.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.466 "is_configured": true, 00:19:21.467 "data_offset": 2048, 00:19:21.467 "data_size": 63488 00:19:21.467 }, 00:19:21.467 { 00:19:21.467 "name": "pt2", 00:19:21.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.467 "is_configured": true, 00:19:21.467 "data_offset": 2048, 00:19:21.467 "data_size": 63488 00:19:21.467 }, 00:19:21.467 { 00:19:21.467 "name": "pt3", 00:19:21.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.467 "is_configured": true, 00:19:21.467 "data_offset": 2048, 00:19:21.467 "data_size": 63488 00:19:21.467 } 00:19:21.467 ] 00:19:21.467 }' 00:19:21.467 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.467 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:22.032 11:31:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.032 [2024-11-20 11:31:29.743465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:22.032 "name": "raid_bdev1", 00:19:22.032 "aliases": [ 00:19:22.032 "99b2f736-688a-4997-8763-453bed3b01bf" 00:19:22.032 ], 00:19:22.032 "product_name": "Raid Volume", 00:19:22.032 "block_size": 512, 00:19:22.032 "num_blocks": 126976, 00:19:22.032 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:22.032 "assigned_rate_limits": { 00:19:22.032 "rw_ios_per_sec": 0, 00:19:22.032 "rw_mbytes_per_sec": 0, 00:19:22.032 "r_mbytes_per_sec": 0, 00:19:22.032 "w_mbytes_per_sec": 0 00:19:22.032 }, 00:19:22.032 "claimed": false, 00:19:22.032 "zoned": false, 00:19:22.032 "supported_io_types": { 00:19:22.032 "read": true, 00:19:22.032 "write": true, 00:19:22.032 "unmap": false, 00:19:22.032 "flush": false, 00:19:22.032 "reset": true, 00:19:22.032 "nvme_admin": false, 00:19:22.032 "nvme_io": false, 00:19:22.032 "nvme_io_md": false, 
00:19:22.032 "write_zeroes": true, 00:19:22.032 "zcopy": false, 00:19:22.032 "get_zone_info": false, 00:19:22.032 "zone_management": false, 00:19:22.032 "zone_append": false, 00:19:22.032 "compare": false, 00:19:22.032 "compare_and_write": false, 00:19:22.032 "abort": false, 00:19:22.032 "seek_hole": false, 00:19:22.032 "seek_data": false, 00:19:22.032 "copy": false, 00:19:22.032 "nvme_iov_md": false 00:19:22.032 }, 00:19:22.032 "driver_specific": { 00:19:22.032 "raid": { 00:19:22.032 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:22.032 "strip_size_kb": 64, 00:19:22.032 "state": "online", 00:19:22.032 "raid_level": "raid5f", 00:19:22.032 "superblock": true, 00:19:22.032 "num_base_bdevs": 3, 00:19:22.032 "num_base_bdevs_discovered": 3, 00:19:22.032 "num_base_bdevs_operational": 3, 00:19:22.032 "base_bdevs_list": [ 00:19:22.032 { 00:19:22.032 "name": "pt1", 00:19:22.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.032 "is_configured": true, 00:19:22.032 "data_offset": 2048, 00:19:22.032 "data_size": 63488 00:19:22.032 }, 00:19:22.032 { 00:19:22.032 "name": "pt2", 00:19:22.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.032 "is_configured": true, 00:19:22.032 "data_offset": 2048, 00:19:22.032 "data_size": 63488 00:19:22.032 }, 00:19:22.032 { 00:19:22.032 "name": "pt3", 00:19:22.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.032 "is_configured": true, 00:19:22.032 "data_offset": 2048, 00:19:22.032 "data_size": 63488 00:19:22.032 } 00:19:22.032 ] 00:19:22.032 } 00:19:22.032 } 00:19:22.032 }' 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:22.032 pt2 00:19:22.032 pt3' 00:19:22.032 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.290 11:31:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.290 
11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:22.290 [2024-11-20 11:31:30.063532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=99b2f736-688a-4997-8763-453bed3b01bf 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 99b2f736-688a-4997-8763-453bed3b01bf ']' 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.290 11:31:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.290 [2024-11-20 11:31:30.119305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.290 [2024-11-20 11:31:30.119500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.290 [2024-11-20 11:31:30.119644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.290 [2024-11-20 11:31:30.119760] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.290 [2024-11-20 11:31:30.119777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.290 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.548 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.548 [2024-11-20 11:31:30.263401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:22.548 [2024-11-20 11:31:30.266103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:22.548 [2024-11-20 11:31:30.266179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:22.548 [2024-11-20 11:31:30.266256] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:22.548 [2024-11-20 11:31:30.266342] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:22.548 [2024-11-20 11:31:30.266377] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:22.548 [2024-11-20 11:31:30.266406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.548 [2024-11-20 11:31:30.266420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:22.548 request: 00:19:22.548 { 00:19:22.548 "name": "raid_bdev1", 00:19:22.548 "raid_level": "raid5f", 00:19:22.548 "base_bdevs": [ 00:19:22.548 "malloc1", 00:19:22.548 "malloc2", 00:19:22.548 "malloc3" 00:19:22.548 ], 00:19:22.548 "strip_size_kb": 64, 00:19:22.549 "superblock": false, 00:19:22.549 "method": "bdev_raid_create", 00:19:22.549 "req_id": 1 00:19:22.549 } 00:19:22.549 Got JSON-RPC error response 00:19:22.549 response: 00:19:22.549 { 00:19:22.549 "code": -17, 00:19:22.549 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:22.549 } 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.549 
11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.549 [2024-11-20 11:31:30.339341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:22.549 [2024-11-20 11:31:30.339548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.549 [2024-11-20 11:31:30.339636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:22.549 [2024-11-20 11:31:30.339796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.549 [2024-11-20 11:31:30.342692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.549 [2024-11-20 11:31:30.342840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:22.549 [2024-11-20 11:31:30.343054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:22.549 [2024-11-20 11:31:30.343229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:22.549 pt1 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.549 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.807 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.807 "name": "raid_bdev1", 00:19:22.807 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:22.807 "strip_size_kb": 64, 00:19:22.807 "state": "configuring", 00:19:22.807 "raid_level": "raid5f", 00:19:22.807 "superblock": true, 00:19:22.807 "num_base_bdevs": 3, 00:19:22.807 "num_base_bdevs_discovered": 1, 00:19:22.807 
"num_base_bdevs_operational": 3, 00:19:22.807 "base_bdevs_list": [ 00:19:22.807 { 00:19:22.807 "name": "pt1", 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.807 "is_configured": true, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 }, 00:19:22.807 { 00:19:22.807 "name": null, 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.807 "is_configured": false, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 }, 00:19:22.807 { 00:19:22.807 "name": null, 00:19:22.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.807 "is_configured": false, 00:19:22.807 "data_offset": 2048, 00:19:22.807 "data_size": 63488 00:19:22.807 } 00:19:22.807 ] 00:19:22.807 }' 00:19:22.807 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.807 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.066 [2024-11-20 11:31:30.823707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.066 [2024-11-20 11:31:30.823920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.066 [2024-11-20 11:31:30.823966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:23.066 [2024-11-20 11:31:30.823984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.066 [2024-11-20 11:31:30.824547] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.066 [2024-11-20 11:31:30.824587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.066 [2024-11-20 11:31:30.824713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:23.066 [2024-11-20 11:31:30.824747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.066 pt2 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.066 [2024-11-20 11:31:30.831699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.066 "name": "raid_bdev1", 00:19:23.066 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:23.066 "strip_size_kb": 64, 00:19:23.066 "state": "configuring", 00:19:23.066 "raid_level": "raid5f", 00:19:23.066 "superblock": true, 00:19:23.066 "num_base_bdevs": 3, 00:19:23.066 "num_base_bdevs_discovered": 1, 00:19:23.066 "num_base_bdevs_operational": 3, 00:19:23.066 "base_bdevs_list": [ 00:19:23.066 { 00:19:23.066 "name": "pt1", 00:19:23.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:23.066 "is_configured": true, 00:19:23.066 "data_offset": 2048, 00:19:23.066 "data_size": 63488 00:19:23.066 }, 00:19:23.066 { 00:19:23.066 "name": null, 00:19:23.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.066 "is_configured": false, 00:19:23.066 "data_offset": 0, 00:19:23.066 "data_size": 63488 00:19:23.066 }, 00:19:23.066 { 00:19:23.066 "name": null, 00:19:23.066 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.066 "is_configured": false, 00:19:23.066 "data_offset": 2048, 00:19:23.066 "data_size": 63488 00:19:23.066 } 00:19:23.066 ] 00:19:23.066 }' 00:19:23.066 11:31:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.066 11:31:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.631 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:23.631 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:23.631 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:23.631 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.631 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.631 [2024-11-20 11:31:31.319862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.631 [2024-11-20 11:31:31.320078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.632 [2024-11-20 11:31:31.320116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:23.632 [2024-11-20 11:31:31.320135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.632 [2024-11-20 11:31:31.320721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.632 [2024-11-20 11:31:31.320757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.632 [2024-11-20 11:31:31.320871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:23.632 [2024-11-20 11:31:31.320908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.632 pt2 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:23.632 11:31:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.632 [2024-11-20 11:31:31.327834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:23.632 [2024-11-20 11:31:31.328011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.632 [2024-11-20 11:31:31.328074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:23.632 [2024-11-20 11:31:31.328279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.632 [2024-11-20 11:31:31.328862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.632 [2024-11-20 11:31:31.329023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:23.632 [2024-11-20 11:31:31.329225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:23.632 [2024-11-20 11:31:31.329375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:23.632 [2024-11-20 11:31:31.329692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:23.632 [2024-11-20 11:31:31.329825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:23.632 [2024-11-20 11:31:31.330273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:23.632 [2024-11-20 11:31:31.335419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:23.632 pt3 00:19:23.632 [2024-11-20 11:31:31.335548] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:23.632 [2024-11-20 11:31:31.335830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.632 "name": "raid_bdev1", 00:19:23.632 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:23.632 "strip_size_kb": 64, 00:19:23.632 "state": "online", 00:19:23.632 "raid_level": "raid5f", 00:19:23.632 "superblock": true, 00:19:23.632 "num_base_bdevs": 3, 00:19:23.632 "num_base_bdevs_discovered": 3, 00:19:23.632 "num_base_bdevs_operational": 3, 00:19:23.632 "base_bdevs_list": [ 00:19:23.632 { 00:19:23.632 "name": "pt1", 00:19:23.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:23.632 "is_configured": true, 00:19:23.632 "data_offset": 2048, 00:19:23.632 "data_size": 63488 00:19:23.632 }, 00:19:23.632 { 00:19:23.632 "name": "pt2", 00:19:23.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.632 "is_configured": true, 00:19:23.632 "data_offset": 2048, 00:19:23.632 "data_size": 63488 00:19:23.632 }, 00:19:23.632 { 00:19:23.632 "name": "pt3", 00:19:23.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.632 "is_configured": true, 00:19:23.632 "data_offset": 2048, 00:19:23.632 "data_size": 63488 00:19:23.632 } 00:19:23.632 ] 00:19:23.632 }' 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.632 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.199 [2024-11-20 11:31:31.849912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.199 "name": "raid_bdev1", 00:19:24.199 "aliases": [ 00:19:24.199 "99b2f736-688a-4997-8763-453bed3b01bf" 00:19:24.199 ], 00:19:24.199 "product_name": "Raid Volume", 00:19:24.199 "block_size": 512, 00:19:24.199 "num_blocks": 126976, 00:19:24.199 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:24.199 "assigned_rate_limits": { 00:19:24.199 "rw_ios_per_sec": 0, 00:19:24.199 "rw_mbytes_per_sec": 0, 00:19:24.199 "r_mbytes_per_sec": 0, 00:19:24.199 "w_mbytes_per_sec": 0 00:19:24.199 }, 00:19:24.199 "claimed": false, 00:19:24.199 "zoned": false, 00:19:24.199 "supported_io_types": { 00:19:24.199 "read": true, 00:19:24.199 "write": true, 00:19:24.199 "unmap": false, 00:19:24.199 "flush": false, 00:19:24.199 "reset": true, 00:19:24.199 "nvme_admin": false, 00:19:24.199 "nvme_io": false, 00:19:24.199 "nvme_io_md": false, 00:19:24.199 "write_zeroes": true, 00:19:24.199 "zcopy": false, 00:19:24.199 
"get_zone_info": false, 00:19:24.199 "zone_management": false, 00:19:24.199 "zone_append": false, 00:19:24.199 "compare": false, 00:19:24.199 "compare_and_write": false, 00:19:24.199 "abort": false, 00:19:24.199 "seek_hole": false, 00:19:24.199 "seek_data": false, 00:19:24.199 "copy": false, 00:19:24.199 "nvme_iov_md": false 00:19:24.199 }, 00:19:24.199 "driver_specific": { 00:19:24.199 "raid": { 00:19:24.199 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:24.199 "strip_size_kb": 64, 00:19:24.199 "state": "online", 00:19:24.199 "raid_level": "raid5f", 00:19:24.199 "superblock": true, 00:19:24.199 "num_base_bdevs": 3, 00:19:24.199 "num_base_bdevs_discovered": 3, 00:19:24.199 "num_base_bdevs_operational": 3, 00:19:24.199 "base_bdevs_list": [ 00:19:24.199 { 00:19:24.199 "name": "pt1", 00:19:24.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:24.199 "is_configured": true, 00:19:24.199 "data_offset": 2048, 00:19:24.199 "data_size": 63488 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "name": "pt2", 00:19:24.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.199 "is_configured": true, 00:19:24.199 "data_offset": 2048, 00:19:24.199 "data_size": 63488 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "name": "pt3", 00:19:24.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.199 "is_configured": true, 00:19:24.199 "data_offset": 2048, 00:19:24.199 "data_size": 63488 00:19:24.199 } 00:19:24.199 ] 00:19:24.199 } 00:19:24.199 } 00:19:24.199 }' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:24.199 pt2 00:19:24.199 pt3' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.199 11:31:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.199 11:31:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.199 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.199 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.199 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.199 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.458 [2024-11-20 11:31:32.141916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 99b2f736-688a-4997-8763-453bed3b01bf '!=' 99b2f736-688a-4997-8763-453bed3b01bf ']' 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.458 [2024-11-20 11:31:32.189754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.458 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.459 "name": "raid_bdev1", 00:19:24.459 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:24.459 "strip_size_kb": 64, 00:19:24.459 "state": "online", 00:19:24.459 "raid_level": "raid5f", 00:19:24.459 "superblock": true, 00:19:24.459 "num_base_bdevs": 3, 00:19:24.459 "num_base_bdevs_discovered": 2, 00:19:24.459 "num_base_bdevs_operational": 2, 00:19:24.459 "base_bdevs_list": [ 00:19:24.459 { 00:19:24.459 "name": null, 00:19:24.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.459 "is_configured": false, 00:19:24.459 "data_offset": 0, 00:19:24.459 "data_size": 63488 00:19:24.459 }, 00:19:24.459 { 00:19:24.459 "name": "pt2", 00:19:24.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.459 "is_configured": true, 00:19:24.459 "data_offset": 2048, 00:19:24.459 "data_size": 63488 00:19:24.459 }, 00:19:24.459 { 00:19:24.459 "name": "pt3", 00:19:24.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.459 "is_configured": true, 00:19:24.459 "data_offset": 2048, 00:19:24.459 "data_size": 63488 00:19:24.459 } 00:19:24.459 ] 00:19:24.459 }' 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.459 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.026 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:25.026 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.026 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.026 [2024-11-20 11:31:32.701878] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.027 [2024-11-20 11:31:32.701913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.027 [2024-11-20 11:31:32.702022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.027 [2024-11-20 11:31:32.702137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.027 [2024-11-20 11:31:32.702161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 [2024-11-20 11:31:32.785858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:25.027 [2024-11-20 11:31:32.786053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.027 [2024-11-20 11:31:32.786148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:25.027 [2024-11-20 11:31:32.786270] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:25.027 [2024-11-20 11:31:32.789188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.027 pt2 00:19:25.027 [2024-11-20 11:31:32.789354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:25.027 [2024-11-20 11:31:32.789468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:25.027 [2024-11-20 11:31:32.789535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.027 "name": "raid_bdev1", 00:19:25.027 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:25.027 "strip_size_kb": 64, 00:19:25.027 "state": "configuring", 00:19:25.027 "raid_level": "raid5f", 00:19:25.027 "superblock": true, 00:19:25.027 "num_base_bdevs": 3, 00:19:25.027 "num_base_bdevs_discovered": 1, 00:19:25.027 "num_base_bdevs_operational": 2, 00:19:25.027 "base_bdevs_list": [ 00:19:25.027 { 00:19:25.027 "name": null, 00:19:25.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.027 "is_configured": false, 00:19:25.027 "data_offset": 2048, 00:19:25.027 "data_size": 63488 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "name": "pt2", 00:19:25.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.027 "is_configured": true, 00:19:25.027 "data_offset": 2048, 00:19:25.027 "data_size": 63488 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "name": null, 00:19:25.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.027 "is_configured": false, 00:19:25.027 "data_offset": 2048, 00:19:25.027 "data_size": 63488 00:19:25.027 } 00:19:25.027 ] 00:19:25.027 }' 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.027 11:31:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.595 [2024-11-20 11:31:33.354025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:25.595 [2024-11-20 11:31:33.354290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.595 [2024-11-20 11:31:33.354367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:25.595 [2024-11-20 11:31:33.354649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.595 [2024-11-20 11:31:33.355288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.595 [2024-11-20 11:31:33.355320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:25.595 [2024-11-20 11:31:33.355433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:25.595 [2024-11-20 11:31:33.355480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:25.595 [2024-11-20 11:31:33.355645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:25.595 [2024-11-20 11:31:33.355666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:25.595 [2024-11-20 11:31:33.355981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:25.595 [2024-11-20 11:31:33.361117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:25.595 [2024-11-20 11:31:33.361144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:19:25.595 pt3 00:19:25.595 [2024-11-20 11:31:33.361563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.595 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.595 11:31:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.595 "name": "raid_bdev1", 00:19:25.595 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:25.595 "strip_size_kb": 64, 00:19:25.595 "state": "online", 00:19:25.595 "raid_level": "raid5f", 00:19:25.595 "superblock": true, 00:19:25.595 "num_base_bdevs": 3, 00:19:25.595 "num_base_bdevs_discovered": 2, 00:19:25.595 "num_base_bdevs_operational": 2, 00:19:25.595 "base_bdevs_list": [ 00:19:25.595 { 00:19:25.595 "name": null, 00:19:25.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.595 "is_configured": false, 00:19:25.595 "data_offset": 2048, 00:19:25.595 "data_size": 63488 00:19:25.595 }, 00:19:25.595 { 00:19:25.595 "name": "pt2", 00:19:25.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.595 "is_configured": true, 00:19:25.595 "data_offset": 2048, 00:19:25.595 "data_size": 63488 00:19:25.595 }, 00:19:25.595 { 00:19:25.595 "name": "pt3", 00:19:25.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.595 "is_configured": true, 00:19:25.595 "data_offset": 2048, 00:19:25.595 "data_size": 63488 00:19:25.595 } 00:19:25.595 ] 00:19:25.595 }' 00:19:25.596 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.596 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.162 [2024-11-20 11:31:33.899315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:26.162 [2024-11-20 11:31:33.899357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.162 [2024-11-20 11:31:33.899452] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.162 [2024-11-20 11:31:33.899537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.162 [2024-11-20 11:31:33.899553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:26.162 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 [2024-11-20 11:31:33.963336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.163 [2024-11-20 11:31:33.963414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.163 [2024-11-20 11:31:33.963453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:26.163 [2024-11-20 11:31:33.963479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.163 [2024-11-20 11:31:33.966476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.163 [2024-11-20 11:31:33.966516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.163 [2024-11-20 11:31:33.966635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:26.163 [2024-11-20 11:31:33.966695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:26.163 [2024-11-20 11:31:33.966860] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:26.163 [2024-11-20 11:31:33.966878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:26.163 [2024-11-20 11:31:33.966902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:26.163 [2024-11-20 11:31:33.966975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:26.163 pt1 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:19:26.163 11:31:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 11:31:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.421 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.421 "name": "raid_bdev1", 00:19:26.421 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:26.421 "strip_size_kb": 64, 00:19:26.421 "state": "configuring", 00:19:26.421 "raid_level": "raid5f", 00:19:26.421 
"superblock": true, 00:19:26.421 "num_base_bdevs": 3, 00:19:26.421 "num_base_bdevs_discovered": 1, 00:19:26.421 "num_base_bdevs_operational": 2, 00:19:26.421 "base_bdevs_list": [ 00:19:26.421 { 00:19:26.421 "name": null, 00:19:26.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.421 "is_configured": false, 00:19:26.421 "data_offset": 2048, 00:19:26.421 "data_size": 63488 00:19:26.421 }, 00:19:26.421 { 00:19:26.421 "name": "pt2", 00:19:26.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.421 "is_configured": true, 00:19:26.421 "data_offset": 2048, 00:19:26.421 "data_size": 63488 00:19:26.421 }, 00:19:26.421 { 00:19:26.421 "name": null, 00:19:26.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.421 "is_configured": false, 00:19:26.421 "data_offset": 2048, 00:19:26.421 "data_size": 63488 00:19:26.421 } 00:19:26.421 ] 00:19:26.421 }' 00:19:26.421 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.421 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 [2024-11-20 11:31:34.575536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:26.988 [2024-11-20 11:31:34.575608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.988 [2024-11-20 11:31:34.575655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:26.988 [2024-11-20 11:31:34.575671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.988 [2024-11-20 11:31:34.576297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.988 [2024-11-20 11:31:34.576339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:26.988 [2024-11-20 11:31:34.576456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:26.988 [2024-11-20 11:31:34.576493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:26.988 [2024-11-20 11:31:34.576694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:26.988 [2024-11-20 11:31:34.576716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:26.988 [2024-11-20 11:31:34.577051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:26.988 [2024-11-20 11:31:34.582192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:26.988 [2024-11-20 11:31:34.582234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:26.988 [2024-11-20 11:31:34.582555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.988 pt3 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.988 "name": "raid_bdev1", 00:19:26.988 "uuid": "99b2f736-688a-4997-8763-453bed3b01bf", 00:19:26.988 "strip_size_kb": 64, 00:19:26.988 "state": "online", 00:19:26.988 "raid_level": 
"raid5f", 00:19:26.988 "superblock": true, 00:19:26.988 "num_base_bdevs": 3, 00:19:26.988 "num_base_bdevs_discovered": 2, 00:19:26.988 "num_base_bdevs_operational": 2, 00:19:26.988 "base_bdevs_list": [ 00:19:26.988 { 00:19:26.988 "name": null, 00:19:26.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.988 "is_configured": false, 00:19:26.988 "data_offset": 2048, 00:19:26.988 "data_size": 63488 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "name": "pt2", 00:19:26.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.988 "is_configured": true, 00:19:26.988 "data_offset": 2048, 00:19:26.988 "data_size": 63488 00:19:26.988 }, 00:19:26.988 { 00:19:26.988 "name": "pt3", 00:19:26.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.988 "is_configured": true, 00:19:26.988 "data_offset": 2048, 00:19:26.988 "data_size": 63488 00:19:26.988 } 00:19:26.988 ] 00:19:26.988 }' 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.988 11:31:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.298 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:27.298 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:27.298 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.298 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.298 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:27.557 [2024-11-20 11:31:35.164596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 99b2f736-688a-4997-8763-453bed3b01bf '!=' 99b2f736-688a-4997-8763-453bed3b01bf ']' 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81393 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81393 ']' 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81393 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81393 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.557 killing process with pid 81393 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81393' 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81393 00:19:27.557 [2024-11-20 11:31:35.242917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.557 11:31:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81393 00:19:27.557 [2024-11-20 11:31:35.243037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.557 [2024-11-20 11:31:35.243114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.557 [2024-11-20 11:31:35.243134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:27.816 [2024-11-20 11:31:35.516653] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:28.751 11:31:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:28.751 00:19:28.751 real 0m8.618s 00:19:28.751 user 0m14.071s 00:19:28.751 sys 0m1.242s 00:19:28.751 11:31:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.751 11:31:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.751 ************************************ 00:19:28.751 END TEST raid5f_superblock_test 00:19:28.751 ************************************ 00:19:28.751 11:31:36 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:28.751 11:31:36 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:19:28.751 11:31:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:28.751 11:31:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.751 11:31:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:28.751 ************************************ 00:19:28.751 START TEST raid5f_rebuild_test 00:19:28.751 ************************************ 00:19:28.751 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:19:28.751 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:28.751 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:29.010 11:31:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81849 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81849 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81849 ']' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.010 11:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.010 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:29.010 Zero copy mechanism will not be used. 00:19:29.010 [2024-11-20 11:31:36.702153] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:19:29.010 [2024-11-20 11:31:36.702305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81849 ] 00:19:29.269 [2024-11-20 11:31:36.876285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.269 [2024-11-20 11:31:37.013419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.527 [2024-11-20 11:31:37.215386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:29.527 [2024-11-20 11:31:37.215466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 BaseBdev1_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 [2024-11-20 11:31:37.702974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:30.094 [2024-11-20 11:31:37.703049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.094 [2024-11-20 11:31:37.703082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:30.094 [2024-11-20 11:31:37.703102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.094 [2024-11-20 11:31:37.705876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.094 [2024-11-20 11:31:37.705926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:30.094 BaseBdev1 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 BaseBdev2_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 [2024-11-20 11:31:37.750714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:19:30.094 [2024-11-20 11:31:37.750784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.094 [2024-11-20 11:31:37.750812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:30.094 [2024-11-20 11:31:37.750832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.094 [2024-11-20 11:31:37.753594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.094 [2024-11-20 11:31:37.753656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:30.094 BaseBdev2 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 BaseBdev3_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 [2024-11-20 11:31:37.822437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:30.094 [2024-11-20 11:31:37.822516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.094 [2024-11-20 11:31:37.822554] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:19:30.094 [2024-11-20 11:31:37.822578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.094 [2024-11-20 11:31:37.825901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.094 [2024-11-20 11:31:37.825958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:30.094 BaseBdev3 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 spare_malloc 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 spare_delay 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 [2024-11-20 11:31:37.886138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:30.094 [2024-11-20 11:31:37.886214] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.094 [2024-11-20 11:31:37.886246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:30.094 [2024-11-20 11:31:37.886268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.094 [2024-11-20 11:31:37.889632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.094 [2024-11-20 11:31:37.889689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:30.094 spare 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.094 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.094 [2024-11-20 11:31:37.894478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.094 [2024-11-20 11:31:37.897375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.094 [2024-11-20 11:31:37.897493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:30.094 [2024-11-20 11:31:37.897662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:30.094 [2024-11-20 11:31:37.897685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:30.094 [2024-11-20 11:31:37.898108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:30.094 [2024-11-20 11:31:37.904505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:30.095 [2024-11-20 11:31:37.904548] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:30.095 [2024-11-20 11:31:37.904879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.095 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.353 11:31:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.353 "name": "raid_bdev1", 00:19:30.353 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:30.353 "strip_size_kb": 64, 00:19:30.353 "state": "online", 00:19:30.353 "raid_level": "raid5f", 00:19:30.353 "superblock": false, 00:19:30.353 "num_base_bdevs": 3, 00:19:30.353 "num_base_bdevs_discovered": 3, 00:19:30.353 "num_base_bdevs_operational": 3, 00:19:30.353 "base_bdevs_list": [ 00:19:30.353 { 00:19:30.353 "name": "BaseBdev1", 00:19:30.353 "uuid": "69113ff4-21ae-5fb4-99ea-b4acf48e6f13", 00:19:30.353 "is_configured": true, 00:19:30.353 "data_offset": 0, 00:19:30.353 "data_size": 65536 00:19:30.353 }, 00:19:30.353 { 00:19:30.353 "name": "BaseBdev2", 00:19:30.353 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:30.353 "is_configured": true, 00:19:30.353 "data_offset": 0, 00:19:30.353 "data_size": 65536 00:19:30.353 }, 00:19:30.353 { 00:19:30.353 "name": "BaseBdev3", 00:19:30.353 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:30.353 "is_configured": true, 00:19:30.353 "data_offset": 0, 00:19:30.353 "data_size": 65536 00:19:30.353 } 00:19:30.353 ] 00:19:30.353 }' 00:19:30.353 11:31:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.353 11:31:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.611 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.611 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:30.611 [2024-11-20 11:31:38.452195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.885 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:30.885 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:19:30.885 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.885 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:30.885 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:19:30.886 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:31.156 [2024-11-20 11:31:38.856115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:31.156 /dev/nbd0 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:31.156 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:31.157 1+0 records in 00:19:31.157 1+0 records out 00:19:31.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279397 s, 14.7 MB/s 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:31.157 11:31:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:19:31.723 512+0 records in 00:19:31.723 512+0 records out 00:19:31.723 67108864 bytes (67 MB, 64 MiB) copied, 0.486363 s, 138 MB/s 00:19:31.723 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:31.723 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:31.723 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.723 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.723 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:31.724 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.724 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.982 
[2024-11-20 11:31:39.671172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.982 [2024-11-20 11:31:39.680935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.982 "name": "raid_bdev1", 00:19:31.982 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:31.982 "strip_size_kb": 64, 00:19:31.982 "state": "online", 00:19:31.982 "raid_level": "raid5f", 00:19:31.982 "superblock": false, 00:19:31.982 "num_base_bdevs": 3, 00:19:31.982 "num_base_bdevs_discovered": 2, 00:19:31.982 "num_base_bdevs_operational": 2, 00:19:31.982 "base_bdevs_list": [ 00:19:31.982 { 00:19:31.982 "name": null, 00:19:31.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.982 "is_configured": false, 00:19:31.982 "data_offset": 0, 00:19:31.982 "data_size": 65536 00:19:31.982 }, 00:19:31.982 { 00:19:31.982 "name": "BaseBdev2", 00:19:31.982 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:31.982 "is_configured": true, 00:19:31.982 "data_offset": 0, 00:19:31.982 "data_size": 65536 00:19:31.982 }, 00:19:31.982 { 00:19:31.982 "name": "BaseBdev3", 00:19:31.982 "uuid": 
"b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:31.982 "is_configured": true, 00:19:31.982 "data_offset": 0, 00:19:31.982 "data_size": 65536 00:19:31.982 } 00:19:31.982 ] 00:19:31.982 }' 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.982 11:31:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.550 11:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.550 11:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.550 11:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.550 [2024-11-20 11:31:40.185089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.550 [2024-11-20 11:31:40.200509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:19:32.550 11:31:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.550 11:31:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:32.550 [2024-11-20 11:31:40.207869] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.486 11:31:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.486 "name": "raid_bdev1", 00:19:33.486 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:33.486 "strip_size_kb": 64, 00:19:33.486 "state": "online", 00:19:33.486 "raid_level": "raid5f", 00:19:33.486 "superblock": false, 00:19:33.486 "num_base_bdevs": 3, 00:19:33.486 "num_base_bdevs_discovered": 3, 00:19:33.486 "num_base_bdevs_operational": 3, 00:19:33.486 "process": { 00:19:33.486 "type": "rebuild", 00:19:33.486 "target": "spare", 00:19:33.486 "progress": { 00:19:33.486 "blocks": 18432, 00:19:33.486 "percent": 14 00:19:33.486 } 00:19:33.486 }, 00:19:33.486 "base_bdevs_list": [ 00:19:33.486 { 00:19:33.486 "name": "spare", 00:19:33.486 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:33.486 "is_configured": true, 00:19:33.486 "data_offset": 0, 00:19:33.486 "data_size": 65536 00:19:33.486 }, 00:19:33.486 { 00:19:33.486 "name": "BaseBdev2", 00:19:33.486 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:33.486 "is_configured": true, 00:19:33.486 "data_offset": 0, 00:19:33.486 "data_size": 65536 00:19:33.486 }, 00:19:33.486 { 00:19:33.486 "name": "BaseBdev3", 00:19:33.486 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:33.486 "is_configured": true, 00:19:33.486 "data_offset": 0, 00:19:33.486 "data_size": 65536 00:19:33.486 } 00:19:33.486 ] 00:19:33.486 }' 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.486 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.745 [2024-11-20 11:31:41.362194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.745 [2024-11-20 11:31:41.423432] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:33.745 [2024-11-20 11:31:41.423543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.745 [2024-11-20 11:31:41.423572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.745 [2024-11-20 11:31:41.423584] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.745 "name": "raid_bdev1", 00:19:33.745 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:33.745 "strip_size_kb": 64, 00:19:33.745 "state": "online", 00:19:33.745 "raid_level": "raid5f", 00:19:33.745 "superblock": false, 00:19:33.745 "num_base_bdevs": 3, 00:19:33.745 "num_base_bdevs_discovered": 2, 00:19:33.745 "num_base_bdevs_operational": 2, 00:19:33.745 "base_bdevs_list": [ 00:19:33.745 { 00:19:33.745 "name": null, 00:19:33.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.745 "is_configured": false, 00:19:33.745 "data_offset": 0, 00:19:33.745 "data_size": 65536 00:19:33.745 }, 00:19:33.745 { 00:19:33.745 "name": "BaseBdev2", 00:19:33.745 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:33.745 "is_configured": true, 00:19:33.745 "data_offset": 0, 00:19:33.745 "data_size": 65536 00:19:33.745 }, 00:19:33.745 { 00:19:33.745 "name": "BaseBdev3", 00:19:33.745 "uuid": 
"b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:33.745 "is_configured": true, 00:19:33.745 "data_offset": 0, 00:19:33.745 "data_size": 65536 00:19:33.745 } 00:19:33.745 ] 00:19:33.745 }' 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.745 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 11:31:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.313 "name": "raid_bdev1", 00:19:34.313 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:34.313 "strip_size_kb": 64, 00:19:34.313 "state": "online", 00:19:34.313 "raid_level": "raid5f", 00:19:34.313 "superblock": false, 00:19:34.313 "num_base_bdevs": 3, 00:19:34.313 "num_base_bdevs_discovered": 2, 00:19:34.313 "num_base_bdevs_operational": 2, 00:19:34.313 "base_bdevs_list": [ 00:19:34.313 { 00:19:34.313 
"name": null, 00:19:34.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.313 "is_configured": false, 00:19:34.313 "data_offset": 0, 00:19:34.313 "data_size": 65536 00:19:34.313 }, 00:19:34.313 { 00:19:34.313 "name": "BaseBdev2", 00:19:34.313 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:34.313 "is_configured": true, 00:19:34.313 "data_offset": 0, 00:19:34.313 "data_size": 65536 00:19:34.313 }, 00:19:34.313 { 00:19:34.313 "name": "BaseBdev3", 00:19:34.313 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:34.313 "is_configured": true, 00:19:34.313 "data_offset": 0, 00:19:34.313 "data_size": 65536 00:19:34.313 } 00:19:34.313 ] 00:19:34.313 }' 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.313 [2024-11-20 11:31:42.135224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.313 [2024-11-20 11:31:42.149709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.313 11:31:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:34.313 [2024-11-20 11:31:42.156940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.690 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.691 "name": "raid_bdev1", 00:19:35.691 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:35.691 "strip_size_kb": 64, 00:19:35.691 "state": "online", 00:19:35.691 "raid_level": "raid5f", 00:19:35.691 "superblock": false, 00:19:35.691 "num_base_bdevs": 3, 00:19:35.691 "num_base_bdevs_discovered": 3, 00:19:35.691 "num_base_bdevs_operational": 3, 00:19:35.691 "process": { 00:19:35.691 "type": "rebuild", 00:19:35.691 "target": "spare", 00:19:35.691 "progress": { 00:19:35.691 "blocks": 18432, 00:19:35.691 "percent": 14 00:19:35.691 } 00:19:35.691 }, 00:19:35.691 "base_bdevs_list": [ 00:19:35.691 { 00:19:35.691 "name": "spare", 00:19:35.691 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 
00:19:35.691 "data_size": 65536 00:19:35.691 }, 00:19:35.691 { 00:19:35.691 "name": "BaseBdev2", 00:19:35.691 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 00:19:35.691 "data_size": 65536 00:19:35.691 }, 00:19:35.691 { 00:19:35.691 "name": "BaseBdev3", 00:19:35.691 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 00:19:35.691 "data_size": 65536 00:19:35.691 } 00:19:35.691 ] 00:19:35.691 }' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.691 11:31:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.691 "name": "raid_bdev1", 00:19:35.691 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:35.691 "strip_size_kb": 64, 00:19:35.691 "state": "online", 00:19:35.691 "raid_level": "raid5f", 00:19:35.691 "superblock": false, 00:19:35.691 "num_base_bdevs": 3, 00:19:35.691 "num_base_bdevs_discovered": 3, 00:19:35.691 "num_base_bdevs_operational": 3, 00:19:35.691 "process": { 00:19:35.691 "type": "rebuild", 00:19:35.691 "target": "spare", 00:19:35.691 "progress": { 00:19:35.691 "blocks": 22528, 00:19:35.691 "percent": 17 00:19:35.691 } 00:19:35.691 }, 00:19:35.691 "base_bdevs_list": [ 00:19:35.691 { 00:19:35.691 "name": "spare", 00:19:35.691 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 00:19:35.691 "data_size": 65536 00:19:35.691 }, 00:19:35.691 { 00:19:35.691 "name": "BaseBdev2", 00:19:35.691 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 00:19:35.691 "data_size": 65536 00:19:35.691 }, 00:19:35.691 { 00:19:35.691 "name": "BaseBdev3", 00:19:35.691 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:35.691 "is_configured": true, 00:19:35.691 "data_offset": 0, 00:19:35.691 "data_size": 65536 00:19:35.691 } 
00:19:35.691 ] 00:19:35.691 }' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.691 11:31:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.068 "name": "raid_bdev1", 00:19:37.068 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:37.068 
"strip_size_kb": 64, 00:19:37.068 "state": "online", 00:19:37.068 "raid_level": "raid5f", 00:19:37.068 "superblock": false, 00:19:37.068 "num_base_bdevs": 3, 00:19:37.068 "num_base_bdevs_discovered": 3, 00:19:37.068 "num_base_bdevs_operational": 3, 00:19:37.068 "process": { 00:19:37.068 "type": "rebuild", 00:19:37.068 "target": "spare", 00:19:37.068 "progress": { 00:19:37.068 "blocks": 47104, 00:19:37.068 "percent": 35 00:19:37.068 } 00:19:37.068 }, 00:19:37.068 "base_bdevs_list": [ 00:19:37.068 { 00:19:37.068 "name": "spare", 00:19:37.068 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:37.068 "is_configured": true, 00:19:37.068 "data_offset": 0, 00:19:37.068 "data_size": 65536 00:19:37.068 }, 00:19:37.068 { 00:19:37.068 "name": "BaseBdev2", 00:19:37.068 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:37.068 "is_configured": true, 00:19:37.068 "data_offset": 0, 00:19:37.068 "data_size": 65536 00:19:37.068 }, 00:19:37.068 { 00:19:37.068 "name": "BaseBdev3", 00:19:37.068 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:37.068 "is_configured": true, 00:19:37.068 "data_offset": 0, 00:19:37.068 "data_size": 65536 00:19:37.068 } 00:19:37.068 ] 00:19:37.068 }' 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.068 11:31:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.004 11:31:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.004 11:31:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.005 "name": "raid_bdev1", 00:19:38.005 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:38.005 "strip_size_kb": 64, 00:19:38.005 "state": "online", 00:19:38.005 "raid_level": "raid5f", 00:19:38.005 "superblock": false, 00:19:38.005 "num_base_bdevs": 3, 00:19:38.005 "num_base_bdevs_discovered": 3, 00:19:38.005 "num_base_bdevs_operational": 3, 00:19:38.005 "process": { 00:19:38.005 "type": "rebuild", 00:19:38.005 "target": "spare", 00:19:38.005 "progress": { 00:19:38.005 "blocks": 69632, 00:19:38.005 "percent": 53 00:19:38.005 } 00:19:38.005 }, 00:19:38.005 "base_bdevs_list": [ 00:19:38.005 { 00:19:38.005 "name": "spare", 00:19:38.005 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:38.005 "is_configured": true, 00:19:38.005 "data_offset": 0, 00:19:38.005 "data_size": 65536 00:19:38.005 }, 00:19:38.005 { 00:19:38.005 "name": "BaseBdev2", 00:19:38.005 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:38.005 
"is_configured": true, 00:19:38.005 "data_offset": 0, 00:19:38.005 "data_size": 65536 00:19:38.005 }, 00:19:38.005 { 00:19:38.005 "name": "BaseBdev3", 00:19:38.005 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:38.005 "is_configured": true, 00:19:38.005 "data_offset": 0, 00:19:38.005 "data_size": 65536 00:19:38.005 } 00:19:38.005 ] 00:19:38.005 }' 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.005 11:31:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.388 "name": "raid_bdev1", 00:19:39.388 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:39.388 "strip_size_kb": 64, 00:19:39.388 "state": "online", 00:19:39.388 "raid_level": "raid5f", 00:19:39.388 "superblock": false, 00:19:39.388 "num_base_bdevs": 3, 00:19:39.388 "num_base_bdevs_discovered": 3, 00:19:39.388 "num_base_bdevs_operational": 3, 00:19:39.388 "process": { 00:19:39.388 "type": "rebuild", 00:19:39.388 "target": "spare", 00:19:39.388 "progress": { 00:19:39.388 "blocks": 92160, 00:19:39.388 "percent": 70 00:19:39.388 } 00:19:39.388 }, 00:19:39.388 "base_bdevs_list": [ 00:19:39.388 { 00:19:39.388 "name": "spare", 00:19:39.388 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:39.388 "is_configured": true, 00:19:39.388 "data_offset": 0, 00:19:39.388 "data_size": 65536 00:19:39.388 }, 00:19:39.388 { 00:19:39.388 "name": "BaseBdev2", 00:19:39.388 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:39.388 "is_configured": true, 00:19:39.388 "data_offset": 0, 00:19:39.388 "data_size": 65536 00:19:39.388 }, 00:19:39.388 { 00:19:39.388 "name": "BaseBdev3", 00:19:39.388 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:39.388 "is_configured": true, 00:19:39.388 "data_offset": 0, 00:19:39.388 "data_size": 65536 00:19:39.388 } 00:19:39.388 ] 00:19:39.388 }' 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.388 11:31:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.388 11:31:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.365 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.366 11:31:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.366 11:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.366 11:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.366 11:31:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.366 "name": "raid_bdev1", 00:19:40.366 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:40.366 "strip_size_kb": 64, 00:19:40.366 "state": "online", 00:19:40.366 "raid_level": "raid5f", 00:19:40.366 "superblock": false, 00:19:40.366 "num_base_bdevs": 3, 00:19:40.366 "num_base_bdevs_discovered": 3, 00:19:40.366 "num_base_bdevs_operational": 3, 00:19:40.366 "process": { 00:19:40.366 "type": "rebuild", 00:19:40.366 "target": "spare", 00:19:40.366 "progress": { 00:19:40.366 "blocks": 116736, 00:19:40.366 "percent": 89 00:19:40.366 } 00:19:40.366 }, 00:19:40.366 "base_bdevs_list": [ 00:19:40.366 { 
00:19:40.366 "name": "spare", 00:19:40.366 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:40.366 "is_configured": true, 00:19:40.366 "data_offset": 0, 00:19:40.366 "data_size": 65536 00:19:40.366 }, 00:19:40.366 { 00:19:40.366 "name": "BaseBdev2", 00:19:40.366 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:40.366 "is_configured": true, 00:19:40.366 "data_offset": 0, 00:19:40.366 "data_size": 65536 00:19:40.366 }, 00:19:40.366 { 00:19:40.366 "name": "BaseBdev3", 00:19:40.366 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:40.366 "is_configured": true, 00:19:40.366 "data_offset": 0, 00:19:40.366 "data_size": 65536 00:19:40.366 } 00:19:40.366 ] 00:19:40.366 }' 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.366 11:31:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.933 [2024-11-20 11:31:48.637031] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:40.933 [2024-11-20 11:31:48.637168] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:40.933 [2024-11-20 11:31:48.637231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.502 11:31:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.502 "name": "raid_bdev1", 00:19:41.502 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:41.502 "strip_size_kb": 64, 00:19:41.502 "state": "online", 00:19:41.502 "raid_level": "raid5f", 00:19:41.502 "superblock": false, 00:19:41.502 "num_base_bdevs": 3, 00:19:41.502 "num_base_bdevs_discovered": 3, 00:19:41.502 "num_base_bdevs_operational": 3, 00:19:41.502 "base_bdevs_list": [ 00:19:41.502 { 00:19:41.502 "name": "spare", 00:19:41.502 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 }, 00:19:41.502 { 00:19:41.502 "name": "BaseBdev2", 00:19:41.502 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 }, 00:19:41.502 { 00:19:41.502 "name": "BaseBdev3", 00:19:41.502 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 } 
00:19:41.502 ] 00:19:41.502 }' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.502 "name": "raid_bdev1", 00:19:41.502 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:41.502 "strip_size_kb": 64, 00:19:41.502 "state": "online", 00:19:41.502 "raid_level": "raid5f", 00:19:41.502 "superblock": false, 
00:19:41.502 "num_base_bdevs": 3, 00:19:41.502 "num_base_bdevs_discovered": 3, 00:19:41.502 "num_base_bdevs_operational": 3, 00:19:41.502 "base_bdevs_list": [ 00:19:41.502 { 00:19:41.502 "name": "spare", 00:19:41.502 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 }, 00:19:41.502 { 00:19:41.502 "name": "BaseBdev2", 00:19:41.502 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 }, 00:19:41.502 { 00:19:41.502 "name": "BaseBdev3", 00:19:41.502 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 00:19:41.502 "is_configured": true, 00:19:41.502 "data_offset": 0, 00:19:41.502 "data_size": 65536 00:19:41.502 } 00:19:41.502 ] 00:19:41.502 }' 00:19:41.502 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.761 
11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.761 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.761 "name": "raid_bdev1", 00:19:41.761 "uuid": "63a3788d-369d-46d5-a53f-dca550d2e406", 00:19:41.761 "strip_size_kb": 64, 00:19:41.761 "state": "online", 00:19:41.761 "raid_level": "raid5f", 00:19:41.761 "superblock": false, 00:19:41.761 "num_base_bdevs": 3, 00:19:41.761 "num_base_bdevs_discovered": 3, 00:19:41.761 "num_base_bdevs_operational": 3, 00:19:41.761 "base_bdevs_list": [ 00:19:41.761 { 00:19:41.761 "name": "spare", 00:19:41.761 "uuid": "61d7cafa-6b32-51a9-b475-03c7be53bd1d", 00:19:41.761 "is_configured": true, 00:19:41.761 "data_offset": 0, 00:19:41.761 "data_size": 65536 00:19:41.761 }, 00:19:41.761 { 00:19:41.761 "name": "BaseBdev2", 00:19:41.761 "uuid": "9a0f75e5-ae01-5f50-97bd-d5f66ab667c8", 00:19:41.761 "is_configured": true, 00:19:41.761 "data_offset": 0, 00:19:41.761 "data_size": 65536 00:19:41.761 }, 00:19:41.761 { 00:19:41.762 "name": "BaseBdev3", 00:19:41.762 "uuid": "b4196c86-b07c-53a0-b637-80cb629e17d1", 
00:19:41.762 "is_configured": true, 00:19:41.762 "data_offset": 0, 00:19:41.762 "data_size": 65536 00:19:41.762 } 00:19:41.762 ] 00:19:41.762 }' 00:19:41.762 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.762 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.329 [2024-11-20 11:31:49.921737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.329 [2024-11-20 11:31:49.921779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.329 [2024-11-20 11:31:49.921886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.329 [2024-11-20 11:31:49.921990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.329 [2024-11-20 11:31:49.922014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.329 11:31:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:42.589 /dev/nbd0 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.589 1+0 records in 00:19:42.589 1+0 records out 00:19:42.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032866 s, 12.5 MB/s 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.589 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:42.848 /dev/nbd1 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:42.848 11:31:50 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.848 1+0 records in 00:19:42.848 1+0 records out 00:19:42.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414282 s, 9.9 MB/s 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:42.848 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.107 11:31:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:43.366 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81849 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81849 ']' 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81849 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.624 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81849 00:19:43.625 killing process with pid 81849 00:19:43.625 Received shutdown signal, test time was about 60.000000 seconds 00:19:43.625 00:19:43.625 Latency(us) 00:19:43.625 [2024-11-20T11:31:51.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.625 [2024-11-20T11:31:51.471Z] =================================================================================================================== 00:19:43.625 [2024-11-20T11:31:51.471Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:19:43.625 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.625 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.625 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81849' 00:19:43.625 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81849 00:19:43.625 11:31:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81849 00:19:43.625 [2024-11-20 11:31:51.454386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.192 [2024-11-20 11:31:51.812924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:45.131 00:19:45.131 real 0m16.272s 00:19:45.131 user 0m20.761s 00:19:45.131 sys 0m2.018s 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.131 ************************************ 00:19:45.131 END TEST raid5f_rebuild_test 00:19:45.131 ************************************ 00:19:45.131 11:31:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:45.131 11:31:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:45.131 11:31:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.131 11:31:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.131 ************************************ 00:19:45.131 START TEST raid5f_rebuild_test_sb 00:19:45.131 ************************************ 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:45.131 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82302 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82302 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82302 ']' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.132 11:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.390 [2024-11-20 11:31:53.024428] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:19:45.390 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:45.390 Zero copy mechanism will not be used. 00:19:45.390 [2024-11-20 11:31:53.024587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82302 ] 00:19:45.390 [2024-11-20 11:31:53.206327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.649 [2024-11-20 11:31:53.367753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.909 [2024-11-20 11:31:53.589666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.909 [2024-11-20 11:31:53.589723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.505 11:31:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.505 BaseBdev1_malloc 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.505 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.505 [2024-11-20 11:31:54.078905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:46.505 [2024-11-20 11:31:54.079009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.505 [2024-11-20 11:31:54.079045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:46.505 [2024-11-20 11:31:54.079065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.505 [2024-11-20 11:31:54.081884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.506 [2024-11-20 11:31:54.081938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:46.506 BaseBdev1 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 BaseBdev2_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 [2024-11-20 11:31:54.136129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:46.506 [2024-11-20 11:31:54.136223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.506 [2024-11-20 11:31:54.136252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:46.506 [2024-11-20 11:31:54.136273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.506 [2024-11-20 11:31:54.139219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.506 [2024-11-20 11:31:54.139299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.506 BaseBdev2 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 BaseBdev3_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 [2024-11-20 11:31:54.203022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:46.506 [2024-11-20 11:31:54.203094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.506 [2024-11-20 11:31:54.203127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:46.506 [2024-11-20 11:31:54.203158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.506 [2024-11-20 11:31:54.205962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.506 [2024-11-20 11:31:54.206013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.506 BaseBdev3 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 spare_malloc 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 spare_delay 00:19:46.506 
11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 [2024-11-20 11:31:54.263911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.506 [2024-11-20 11:31:54.263979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.506 [2024-11-20 11:31:54.264006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:46.506 [2024-11-20 11:31:54.264023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.506 [2024-11-20 11:31:54.266875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.506 [2024-11-20 11:31:54.266928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.506 spare 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 [2024-11-20 11:31:54.272021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.506 [2024-11-20 11:31:54.274473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.506 [2024-11-20 11:31:54.274572] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.506 [2024-11-20 11:31:54.274880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:46.506 [2024-11-20 11:31:54.274912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:46.506 [2024-11-20 11:31:54.275236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:46.506 [2024-11-20 11:31:54.280448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:46.506 [2024-11-20 11:31:54.280498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:46.506 [2024-11-20 11:31:54.280760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.506 "name": "raid_bdev1", 00:19:46.506 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:46.506 "strip_size_kb": 64, 00:19:46.506 "state": "online", 00:19:46.506 "raid_level": "raid5f", 00:19:46.506 "superblock": true, 00:19:46.506 "num_base_bdevs": 3, 00:19:46.506 "num_base_bdevs_discovered": 3, 00:19:46.506 "num_base_bdevs_operational": 3, 00:19:46.506 "base_bdevs_list": [ 00:19:46.506 { 00:19:46.506 "name": "BaseBdev1", 00:19:46.506 "uuid": "15cf264c-1f01-5370-b6b7-bb3c85431f34", 00:19:46.506 "is_configured": true, 00:19:46.506 "data_offset": 2048, 00:19:46.506 "data_size": 63488 00:19:46.506 }, 00:19:46.506 { 00:19:46.506 "name": "BaseBdev2", 00:19:46.506 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:46.506 "is_configured": true, 00:19:46.506 "data_offset": 2048, 00:19:46.506 "data_size": 63488 00:19:46.506 }, 00:19:46.506 { 00:19:46.506 "name": "BaseBdev3", 00:19:46.506 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:46.506 "is_configured": true, 00:19:46.506 "data_offset": 2048, 00:19:46.506 "data_size": 63488 00:19:46.506 } 00:19:46.506 ] 00:19:46.506 }' 00:19:46.506 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.506 11:31:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.075 [2024-11-20 11:31:54.814838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:47.075 11:31:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.075 11:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:47.642 [2024-11-20 11:31:55.198798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:47.642 /dev/nbd0 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.642 1+0 records in 00:19:47.642 1+0 records out 00:19:47.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032504 s, 12.6 MB/s 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:47.642 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:48.209 496+0 records in 00:19:48.209 496+0 records out 00:19:48.209 65011712 bytes (65 MB, 62 MiB) copied, 0.50505 s, 129 MB/s 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.209 11:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.468 [2024-11-20 11:31:56.082255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:48.468 [2024-11-20 11:31:56.096108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.468 "name": "raid_bdev1", 00:19:48.468 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:48.468 "strip_size_kb": 64, 00:19:48.468 "state": "online", 00:19:48.468 "raid_level": "raid5f", 00:19:48.468 "superblock": true, 00:19:48.468 "num_base_bdevs": 3, 00:19:48.468 "num_base_bdevs_discovered": 2, 00:19:48.468 "num_base_bdevs_operational": 2, 00:19:48.468 "base_bdevs_list": [ 00:19:48.468 { 00:19:48.468 "name": null, 00:19:48.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.468 "is_configured": false, 00:19:48.468 "data_offset": 0, 00:19:48.468 "data_size": 63488 00:19:48.468 }, 00:19:48.468 { 00:19:48.468 "name": "BaseBdev2", 00:19:48.468 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:48.468 "is_configured": true, 00:19:48.468 "data_offset": 2048, 00:19:48.468 "data_size": 63488 00:19:48.468 }, 00:19:48.468 { 00:19:48.468 "name": "BaseBdev3", 00:19:48.468 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:48.468 "is_configured": true, 00:19:48.468 "data_offset": 2048, 00:19:48.468 "data_size": 63488 00:19:48.468 } 00:19:48.468 ] 00:19:48.468 }' 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.468 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.037 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.037 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.037 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.037 [2024-11-20 11:31:56.616461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.037 [2024-11-20 11:31:56.632427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:49.037 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.037 11:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:49.037 [2024-11-20 11:31:56.640173] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.974 "name": "raid_bdev1", 00:19:49.974 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:49.974 "strip_size_kb": 64, 00:19:49.974 "state": "online", 00:19:49.974 "raid_level": "raid5f", 00:19:49.974 "superblock": true, 00:19:49.974 "num_base_bdevs": 3, 00:19:49.974 "num_base_bdevs_discovered": 3, 00:19:49.974 "num_base_bdevs_operational": 3, 00:19:49.974 "process": { 00:19:49.974 "type": "rebuild", 00:19:49.974 "target": "spare", 00:19:49.974 "progress": { 
00:19:49.974 "blocks": 18432, 00:19:49.974 "percent": 14 00:19:49.974 } 00:19:49.974 }, 00:19:49.974 "base_bdevs_list": [ 00:19:49.974 { 00:19:49.974 "name": "spare", 00:19:49.974 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:49.974 "is_configured": true, 00:19:49.974 "data_offset": 2048, 00:19:49.974 "data_size": 63488 00:19:49.974 }, 00:19:49.974 { 00:19:49.974 "name": "BaseBdev2", 00:19:49.974 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:49.974 "is_configured": true, 00:19:49.974 "data_offset": 2048, 00:19:49.974 "data_size": 63488 00:19:49.974 }, 00:19:49.974 { 00:19:49.974 "name": "BaseBdev3", 00:19:49.974 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:49.974 "is_configured": true, 00:19:49.974 "data_offset": 2048, 00:19:49.974 "data_size": 63488 00:19:49.974 } 00:19:49.974 ] 00:19:49.974 }' 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.974 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.974 [2024-11-20 11:31:57.802285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.233 [2024-11-20 11:31:57.854687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:50.233 [2024-11-20 11:31:57.854775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:50.233 [2024-11-20 11:31:57.854804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.233 [2024-11-20 11:31:57.854815] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.233 11:31:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.233 "name": "raid_bdev1", 00:19:50.233 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:50.233 "strip_size_kb": 64, 00:19:50.233 "state": "online", 00:19:50.233 "raid_level": "raid5f", 00:19:50.233 "superblock": true, 00:19:50.233 "num_base_bdevs": 3, 00:19:50.233 "num_base_bdevs_discovered": 2, 00:19:50.233 "num_base_bdevs_operational": 2, 00:19:50.233 "base_bdevs_list": [ 00:19:50.233 { 00:19:50.233 "name": null, 00:19:50.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.233 "is_configured": false, 00:19:50.233 "data_offset": 0, 00:19:50.233 "data_size": 63488 00:19:50.233 }, 00:19:50.233 { 00:19:50.233 "name": "BaseBdev2", 00:19:50.233 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:50.233 "is_configured": true, 00:19:50.233 "data_offset": 2048, 00:19:50.233 "data_size": 63488 00:19:50.233 }, 00:19:50.233 { 00:19:50.233 "name": "BaseBdev3", 00:19:50.233 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:50.233 "is_configured": true, 00:19:50.233 "data_offset": 2048, 00:19:50.233 "data_size": 63488 00:19:50.233 } 00:19:50.233 ] 00:19:50.233 }' 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.233 11:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.801 "name": "raid_bdev1", 00:19:50.801 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:50.801 "strip_size_kb": 64, 00:19:50.801 "state": "online", 00:19:50.801 "raid_level": "raid5f", 00:19:50.801 "superblock": true, 00:19:50.801 "num_base_bdevs": 3, 00:19:50.801 "num_base_bdevs_discovered": 2, 00:19:50.801 "num_base_bdevs_operational": 2, 00:19:50.801 "base_bdevs_list": [ 00:19:50.801 { 00:19:50.801 "name": null, 00:19:50.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.801 "is_configured": false, 00:19:50.801 "data_offset": 0, 00:19:50.801 "data_size": 63488 00:19:50.801 }, 00:19:50.801 { 00:19:50.801 "name": "BaseBdev2", 00:19:50.801 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:50.801 "is_configured": true, 00:19:50.801 "data_offset": 2048, 00:19:50.801 "data_size": 63488 00:19:50.801 }, 00:19:50.801 { 00:19:50.801 "name": "BaseBdev3", 00:19:50.801 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:50.801 "is_configured": true, 00:19:50.801 "data_offset": 2048, 00:19:50.801 "data_size": 63488 00:19:50.801 } 00:19:50.801 ] 00:19:50.801 }' 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.801 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.802 [2024-11-20 11:31:58.570045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.802 [2024-11-20 11:31:58.585046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.802 11:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:50.802 [2024-11-20 11:31:58.592537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.178 "name": "raid_bdev1", 00:19:52.178 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:52.178 "strip_size_kb": 64, 00:19:52.178 "state": "online", 00:19:52.178 "raid_level": "raid5f", 00:19:52.178 "superblock": true, 00:19:52.178 "num_base_bdevs": 3, 00:19:52.178 "num_base_bdevs_discovered": 3, 00:19:52.178 "num_base_bdevs_operational": 3, 00:19:52.178 "process": { 00:19:52.178 "type": "rebuild", 00:19:52.178 "target": "spare", 00:19:52.178 "progress": { 00:19:52.178 "blocks": 18432, 00:19:52.178 "percent": 14 00:19:52.178 } 00:19:52.178 }, 00:19:52.178 "base_bdevs_list": [ 00:19:52.178 { 00:19:52.178 "name": "spare", 00:19:52.178 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:52.178 "is_configured": true, 00:19:52.178 "data_offset": 2048, 00:19:52.178 "data_size": 63488 00:19:52.178 }, 00:19:52.178 { 00:19:52.178 "name": "BaseBdev2", 00:19:52.178 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:52.178 "is_configured": true, 00:19:52.178 "data_offset": 2048, 00:19:52.178 "data_size": 63488 00:19:52.178 }, 00:19:52.178 { 00:19:52.178 "name": "BaseBdev3", 00:19:52.178 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:52.178 "is_configured": true, 00:19:52.178 "data_offset": 2048, 00:19:52.178 "data_size": 63488 00:19:52.178 } 00:19:52.178 ] 00:19:52.178 }' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.178 
11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:52.178 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.178 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.179 "name": "raid_bdev1", 00:19:52.179 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:52.179 "strip_size_kb": 64, 00:19:52.179 "state": "online", 00:19:52.179 "raid_level": "raid5f", 00:19:52.179 "superblock": true, 00:19:52.179 "num_base_bdevs": 3, 00:19:52.179 "num_base_bdevs_discovered": 3, 00:19:52.179 "num_base_bdevs_operational": 3, 00:19:52.179 "process": { 00:19:52.179 "type": "rebuild", 00:19:52.179 "target": "spare", 00:19:52.179 "progress": { 00:19:52.179 "blocks": 22528, 00:19:52.179 "percent": 17 00:19:52.179 } 00:19:52.179 }, 00:19:52.179 "base_bdevs_list": [ 00:19:52.179 { 00:19:52.179 "name": "spare", 00:19:52.179 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:52.179 "is_configured": true, 00:19:52.179 "data_offset": 2048, 00:19:52.179 "data_size": 63488 00:19:52.179 }, 00:19:52.179 { 00:19:52.179 "name": "BaseBdev2", 00:19:52.179 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:52.179 "is_configured": true, 00:19:52.179 "data_offset": 2048, 00:19:52.179 "data_size": 63488 00:19:52.179 }, 00:19:52.179 { 00:19:52.179 "name": "BaseBdev3", 00:19:52.179 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:52.179 "is_configured": true, 00:19:52.179 "data_offset": 2048, 00:19:52.179 "data_size": 63488 00:19:52.179 } 00:19:52.179 ] 00:19:52.179 }' 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.179 11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.179 
11:31:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.114 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.372 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.372 "name": "raid_bdev1", 00:19:53.372 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:53.372 "strip_size_kb": 64, 00:19:53.372 "state": "online", 00:19:53.372 "raid_level": "raid5f", 00:19:53.372 "superblock": true, 00:19:53.372 "num_base_bdevs": 3, 00:19:53.372 "num_base_bdevs_discovered": 3, 00:19:53.372 "num_base_bdevs_operational": 3, 00:19:53.372 "process": { 00:19:53.372 "type": "rebuild", 00:19:53.372 "target": "spare", 00:19:53.372 "progress": { 00:19:53.372 "blocks": 47104, 00:19:53.372 "percent": 37 00:19:53.372 } 00:19:53.372 }, 00:19:53.372 
"base_bdevs_list": [ 00:19:53.372 { 00:19:53.372 "name": "spare", 00:19:53.372 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:53.372 "is_configured": true, 00:19:53.372 "data_offset": 2048, 00:19:53.372 "data_size": 63488 00:19:53.372 }, 00:19:53.372 { 00:19:53.372 "name": "BaseBdev2", 00:19:53.372 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:53.372 "is_configured": true, 00:19:53.372 "data_offset": 2048, 00:19:53.372 "data_size": 63488 00:19:53.372 }, 00:19:53.372 { 00:19:53.372 "name": "BaseBdev3", 00:19:53.372 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:53.373 "is_configured": true, 00:19:53.373 "data_offset": 2048, 00:19:53.373 "data_size": 63488 00:19:53.373 } 00:19:53.373 ] 00:19:53.373 }' 00:19:53.373 11:32:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.373 11:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.373 11:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.373 11:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.373 11:32:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.309 11:32:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.309 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.310 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.310 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.310 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.310 "name": "raid_bdev1", 00:19:54.310 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:54.310 "strip_size_kb": 64, 00:19:54.310 "state": "online", 00:19:54.310 "raid_level": "raid5f", 00:19:54.310 "superblock": true, 00:19:54.310 "num_base_bdevs": 3, 00:19:54.310 "num_base_bdevs_discovered": 3, 00:19:54.310 "num_base_bdevs_operational": 3, 00:19:54.310 "process": { 00:19:54.310 "type": "rebuild", 00:19:54.310 "target": "spare", 00:19:54.310 "progress": { 00:19:54.310 "blocks": 69632, 00:19:54.310 "percent": 54 00:19:54.310 } 00:19:54.310 }, 00:19:54.310 "base_bdevs_list": [ 00:19:54.310 { 00:19:54.310 "name": "spare", 00:19:54.310 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:54.310 "is_configured": true, 00:19:54.310 "data_offset": 2048, 00:19:54.310 "data_size": 63488 00:19:54.310 }, 00:19:54.310 { 00:19:54.310 "name": "BaseBdev2", 00:19:54.310 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:54.310 "is_configured": true, 00:19:54.310 "data_offset": 2048, 00:19:54.310 "data_size": 63488 00:19:54.310 }, 00:19:54.310 { 00:19:54.310 "name": "BaseBdev3", 00:19:54.310 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:54.310 "is_configured": true, 00:19:54.310 "data_offset": 2048, 00:19:54.310 "data_size": 63488 00:19:54.310 } 00:19:54.310 ] 00:19:54.310 }' 00:19:54.310 11:32:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.617 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.617 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.617 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.617 11:32:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.554 "name": "raid_bdev1", 00:19:55.554 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:55.554 
"strip_size_kb": 64, 00:19:55.554 "state": "online", 00:19:55.554 "raid_level": "raid5f", 00:19:55.554 "superblock": true, 00:19:55.554 "num_base_bdevs": 3, 00:19:55.554 "num_base_bdevs_discovered": 3, 00:19:55.554 "num_base_bdevs_operational": 3, 00:19:55.554 "process": { 00:19:55.554 "type": "rebuild", 00:19:55.554 "target": "spare", 00:19:55.554 "progress": { 00:19:55.554 "blocks": 94208, 00:19:55.554 "percent": 74 00:19:55.554 } 00:19:55.554 }, 00:19:55.554 "base_bdevs_list": [ 00:19:55.554 { 00:19:55.554 "name": "spare", 00:19:55.554 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:55.554 "is_configured": true, 00:19:55.554 "data_offset": 2048, 00:19:55.554 "data_size": 63488 00:19:55.554 }, 00:19:55.554 { 00:19:55.554 "name": "BaseBdev2", 00:19:55.554 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:55.554 "is_configured": true, 00:19:55.554 "data_offset": 2048, 00:19:55.554 "data_size": 63488 00:19:55.554 }, 00:19:55.554 { 00:19:55.554 "name": "BaseBdev3", 00:19:55.554 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:55.554 "is_configured": true, 00:19:55.554 "data_offset": 2048, 00:19:55.554 "data_size": 63488 00:19:55.554 } 00:19:55.554 ] 00:19:55.554 }' 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.554 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.814 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.814 11:32:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.750 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.750 "name": "raid_bdev1", 00:19:56.750 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:56.750 "strip_size_kb": 64, 00:19:56.750 "state": "online", 00:19:56.750 "raid_level": "raid5f", 00:19:56.750 "superblock": true, 00:19:56.750 "num_base_bdevs": 3, 00:19:56.750 "num_base_bdevs_discovered": 3, 00:19:56.750 "num_base_bdevs_operational": 3, 00:19:56.750 "process": { 00:19:56.750 "type": "rebuild", 00:19:56.750 "target": "spare", 00:19:56.750 "progress": { 00:19:56.750 "blocks": 116736, 00:19:56.750 "percent": 91 00:19:56.750 } 00:19:56.750 }, 00:19:56.750 "base_bdevs_list": [ 00:19:56.750 { 00:19:56.750 "name": "spare", 00:19:56.750 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:56.750 "is_configured": true, 00:19:56.750 "data_offset": 2048, 00:19:56.750 "data_size": 63488 00:19:56.750 }, 00:19:56.750 { 00:19:56.750 "name": "BaseBdev2", 00:19:56.750 "uuid": 
"3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:56.750 "is_configured": true, 00:19:56.750 "data_offset": 2048, 00:19:56.750 "data_size": 63488 00:19:56.750 }, 00:19:56.750 { 00:19:56.750 "name": "BaseBdev3", 00:19:56.751 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:56.751 "is_configured": true, 00:19:56.751 "data_offset": 2048, 00:19:56.751 "data_size": 63488 00:19:56.751 } 00:19:56.751 ] 00:19:56.751 }' 00:19:56.751 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.751 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.751 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.751 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.751 11:32:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:57.319 [2024-11-20 11:32:04.867523] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:57.319 [2024-11-20 11:32:04.867704] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:57.319 [2024-11-20 11:32:04.867907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.887 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.887 "name": "raid_bdev1", 00:19:57.887 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:57.887 "strip_size_kb": 64, 00:19:57.887 "state": "online", 00:19:57.887 "raid_level": "raid5f", 00:19:57.887 "superblock": true, 00:19:57.887 "num_base_bdevs": 3, 00:19:57.887 "num_base_bdevs_discovered": 3, 00:19:57.887 "num_base_bdevs_operational": 3, 00:19:57.887 "base_bdevs_list": [ 00:19:57.887 { 00:19:57.887 "name": "spare", 00:19:57.887 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:57.887 "is_configured": true, 00:19:57.887 "data_offset": 2048, 00:19:57.887 "data_size": 63488 00:19:57.887 }, 00:19:57.887 { 00:19:57.887 "name": "BaseBdev2", 00:19:57.888 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:57.888 "is_configured": true, 00:19:57.888 "data_offset": 2048, 00:19:57.888 "data_size": 63488 00:19:57.888 }, 00:19:57.888 { 00:19:57.888 "name": "BaseBdev3", 00:19:57.888 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:57.888 "is_configured": true, 00:19:57.888 "data_offset": 2048, 00:19:57.888 "data_size": 63488 00:19:57.888 } 00:19:57.888 ] 00:19:57.888 }' 00:19:57.888 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.888 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:57.888 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.147 "name": "raid_bdev1", 00:19:58.147 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:58.147 "strip_size_kb": 64, 00:19:58.147 "state": "online", 00:19:58.147 "raid_level": "raid5f", 00:19:58.147 "superblock": true, 00:19:58.147 "num_base_bdevs": 3, 00:19:58.147 "num_base_bdevs_discovered": 3, 00:19:58.147 "num_base_bdevs_operational": 3, 00:19:58.147 "base_bdevs_list": [ 
00:19:58.147 { 00:19:58.147 "name": "spare", 00:19:58.147 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:58.147 "is_configured": true, 00:19:58.147 "data_offset": 2048, 00:19:58.147 "data_size": 63488 00:19:58.147 }, 00:19:58.147 { 00:19:58.147 "name": "BaseBdev2", 00:19:58.147 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:58.147 "is_configured": true, 00:19:58.147 "data_offset": 2048, 00:19:58.147 "data_size": 63488 00:19:58.147 }, 00:19:58.147 { 00:19:58.147 "name": "BaseBdev3", 00:19:58.147 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:58.147 "is_configured": true, 00:19:58.147 "data_offset": 2048, 00:19:58.147 "data_size": 63488 00:19:58.147 } 00:19:58.147 ] 00:19:58.147 }' 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.147 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.148 11:32:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.148 "name": "raid_bdev1", 00:19:58.148 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:19:58.148 "strip_size_kb": 64, 00:19:58.148 "state": "online", 00:19:58.148 "raid_level": "raid5f", 00:19:58.148 "superblock": true, 00:19:58.148 "num_base_bdevs": 3, 00:19:58.148 "num_base_bdevs_discovered": 3, 00:19:58.148 "num_base_bdevs_operational": 3, 00:19:58.148 "base_bdevs_list": [ 00:19:58.148 { 00:19:58.148 "name": "spare", 00:19:58.148 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:19:58.148 "is_configured": true, 00:19:58.148 "data_offset": 2048, 00:19:58.148 "data_size": 63488 00:19:58.148 }, 00:19:58.148 { 00:19:58.148 "name": "BaseBdev2", 00:19:58.148 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:19:58.148 "is_configured": true, 00:19:58.148 "data_offset": 2048, 00:19:58.148 "data_size": 63488 00:19:58.148 }, 00:19:58.148 { 00:19:58.148 "name": "BaseBdev3", 00:19:58.148 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:19:58.148 "is_configured": true, 00:19:58.148 "data_offset": 2048, 00:19:58.148 
"data_size": 63488 00:19:58.148 } 00:19:58.148 ] 00:19:58.148 }' 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.148 11:32:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.715 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:58.715 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.715 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.715 [2024-11-20 11:32:06.427482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.715 [2024-11-20 11:32:06.427521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.715 [2024-11-20 11:32:06.427659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.716 [2024-11-20 11:32:06.427776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.716 [2024-11-20 11:32:06.427810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.716 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:58.974 /dev/nbd0 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:59.233 11:32:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:59.233 1+0 records in 00:19:59.233 1+0 records out 00:19:59.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253647 s, 16.1 MB/s 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:59.233 11:32:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:59.492 /dev/nbd1 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:59.492 11:32:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:59.492 1+0 records in 00:19:59.492 1+0 records out 00:19:59.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539984 s, 7.6 MB/s 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:59.492 11:32:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:59.492 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.750 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.009 11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:00.009 
11:32:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.268 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.527 [2024-11-20 11:32:08.125921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.527 
[2024-11-20 11:32:08.126057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.527 [2024-11-20 11:32:08.126085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:00.527 [2024-11-20 11:32:08.126114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.527 [2024-11-20 11:32:08.129050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.527 [2024-11-20 11:32:08.129128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.527 [2024-11-20 11:32:08.129231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:00.527 [2024-11-20 11:32:08.129321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.527 [2024-11-20 11:32:08.129528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:00.527 [2024-11-20 11:32:08.129707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.527 spare 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.527 [2024-11-20 11:32:08.229876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:00.527 [2024-11-20 11:32:08.229972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:00.527 [2024-11-20 11:32:08.230439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:20:00.527 [2024-11-20 11:32:08.235536] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:00.527 [2024-11-20 11:32:08.235567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:00.527 [2024-11-20 11:32:08.235852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.527 11:32:08 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:20:00.528 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.528 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.528 "name": "raid_bdev1", 00:20:00.528 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:00.528 "strip_size_kb": 64, 00:20:00.528 "state": "online", 00:20:00.528 "raid_level": "raid5f", 00:20:00.528 "superblock": true, 00:20:00.528 "num_base_bdevs": 3, 00:20:00.528 "num_base_bdevs_discovered": 3, 00:20:00.528 "num_base_bdevs_operational": 3, 00:20:00.528 "base_bdevs_list": [ 00:20:00.528 { 00:20:00.528 "name": "spare", 00:20:00.528 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:20:00.528 "is_configured": true, 00:20:00.528 "data_offset": 2048, 00:20:00.528 "data_size": 63488 00:20:00.528 }, 00:20:00.528 { 00:20:00.528 "name": "BaseBdev2", 00:20:00.528 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:00.528 "is_configured": true, 00:20:00.528 "data_offset": 2048, 00:20:00.528 "data_size": 63488 00:20:00.528 }, 00:20:00.528 { 00:20:00.528 "name": "BaseBdev3", 00:20:00.528 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:00.528 "is_configured": true, 00:20:00.528 "data_offset": 2048, 00:20:00.528 "data_size": 63488 00:20:00.528 } 00:20:00.528 ] 00:20:00.528 }' 00:20:00.528 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.528 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.096 "name": "raid_bdev1", 00:20:01.096 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:01.096 "strip_size_kb": 64, 00:20:01.096 "state": "online", 00:20:01.096 "raid_level": "raid5f", 00:20:01.096 "superblock": true, 00:20:01.096 "num_base_bdevs": 3, 00:20:01.096 "num_base_bdevs_discovered": 3, 00:20:01.096 "num_base_bdevs_operational": 3, 00:20:01.096 "base_bdevs_list": [ 00:20:01.096 { 00:20:01.096 "name": "spare", 00:20:01.096 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:20:01.096 "is_configured": true, 00:20:01.096 "data_offset": 2048, 00:20:01.096 "data_size": 63488 00:20:01.096 }, 00:20:01.096 { 00:20:01.096 "name": "BaseBdev2", 00:20:01.096 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:01.096 "is_configured": true, 00:20:01.096 "data_offset": 2048, 00:20:01.096 "data_size": 63488 00:20:01.096 }, 00:20:01.096 { 00:20:01.096 "name": "BaseBdev3", 00:20:01.096 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:01.096 "is_configured": true, 00:20:01.096 "data_offset": 2048, 00:20:01.096 "data_size": 63488 00:20:01.096 } 00:20:01.096 ] 00:20:01.096 }' 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:01.096 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 [2024-11-20 11:32:08.978044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.355 11:32:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.355 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.355 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.355 "name": "raid_bdev1", 00:20:01.355 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:01.355 "strip_size_kb": 64, 00:20:01.355 "state": "online", 00:20:01.355 "raid_level": "raid5f", 00:20:01.355 "superblock": true, 00:20:01.355 "num_base_bdevs": 3, 00:20:01.355 "num_base_bdevs_discovered": 2, 00:20:01.356 "num_base_bdevs_operational": 2, 00:20:01.356 "base_bdevs_list": [ 00:20:01.356 { 00:20:01.356 "name": null, 00:20:01.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.356 "is_configured": false, 00:20:01.356 "data_offset": 0, 00:20:01.356 "data_size": 63488 00:20:01.356 }, 00:20:01.356 { 00:20:01.356 "name": "BaseBdev2", 
00:20:01.356 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:01.356 "is_configured": true, 00:20:01.356 "data_offset": 2048, 00:20:01.356 "data_size": 63488 00:20:01.356 }, 00:20:01.356 { 00:20:01.356 "name": "BaseBdev3", 00:20:01.356 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:01.356 "is_configured": true, 00:20:01.356 "data_offset": 2048, 00:20:01.356 "data_size": 63488 00:20:01.356 } 00:20:01.356 ] 00:20:01.356 }' 00:20:01.356 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.356 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.923 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:01.923 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.923 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.923 [2024-11-20 11:32:09.538316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.923 [2024-11-20 11:32:09.538566] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:01.923 [2024-11-20 11:32:09.538635] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:01.923 [2024-11-20 11:32:09.538711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.923 [2024-11-20 11:32:09.553337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:20:01.923 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.923 11:32:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:01.923 [2024-11-20 11:32:09.560591] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.894 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.895 "name": "raid_bdev1", 00:20:02.895 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:02.895 "strip_size_kb": 64, 00:20:02.895 "state": "online", 00:20:02.895 
"raid_level": "raid5f", 00:20:02.895 "superblock": true, 00:20:02.895 "num_base_bdevs": 3, 00:20:02.895 "num_base_bdevs_discovered": 3, 00:20:02.895 "num_base_bdevs_operational": 3, 00:20:02.895 "process": { 00:20:02.895 "type": "rebuild", 00:20:02.895 "target": "spare", 00:20:02.895 "progress": { 00:20:02.895 "blocks": 18432, 00:20:02.895 "percent": 14 00:20:02.895 } 00:20:02.895 }, 00:20:02.895 "base_bdevs_list": [ 00:20:02.895 { 00:20:02.895 "name": "spare", 00:20:02.895 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:20:02.895 "is_configured": true, 00:20:02.895 "data_offset": 2048, 00:20:02.895 "data_size": 63488 00:20:02.895 }, 00:20:02.895 { 00:20:02.895 "name": "BaseBdev2", 00:20:02.895 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:02.895 "is_configured": true, 00:20:02.895 "data_offset": 2048, 00:20:02.895 "data_size": 63488 00:20:02.895 }, 00:20:02.895 { 00:20:02.895 "name": "BaseBdev3", 00:20:02.895 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:02.895 "is_configured": true, 00:20:02.895 "data_offset": 2048, 00:20:02.895 "data_size": 63488 00:20:02.895 } 00:20:02.895 ] 00:20:02.895 }' 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.895 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.895 [2024-11-20 11:32:10.726522] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:03.155 [2024-11-20 11:32:10.777096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:03.155 [2024-11-20 11:32:10.777234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.155 [2024-11-20 11:32:10.777261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:03.155 [2024-11-20 11:32:10.777276] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.155 "name": "raid_bdev1", 00:20:03.155 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:03.155 "strip_size_kb": 64, 00:20:03.155 "state": "online", 00:20:03.155 "raid_level": "raid5f", 00:20:03.155 "superblock": true, 00:20:03.155 "num_base_bdevs": 3, 00:20:03.155 "num_base_bdevs_discovered": 2, 00:20:03.155 "num_base_bdevs_operational": 2, 00:20:03.155 "base_bdevs_list": [ 00:20:03.155 { 00:20:03.155 "name": null, 00:20:03.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.155 "is_configured": false, 00:20:03.155 "data_offset": 0, 00:20:03.155 "data_size": 63488 00:20:03.155 }, 00:20:03.155 { 00:20:03.155 "name": "BaseBdev2", 00:20:03.155 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:03.155 "is_configured": true, 00:20:03.155 "data_offset": 2048, 00:20:03.155 "data_size": 63488 00:20:03.155 }, 00:20:03.155 { 00:20:03.155 "name": "BaseBdev3", 00:20:03.155 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:03.155 "is_configured": true, 00:20:03.155 "data_offset": 2048, 00:20:03.155 "data_size": 63488 00:20:03.155 } 00:20:03.155 ] 00:20:03.155 }' 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.155 11:32:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.723 11:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:03.723 11:32:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.723 11:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.723 [2024-11-20 11:32:11.344030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:03.723 [2024-11-20 11:32:11.344154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.723 [2024-11-20 11:32:11.344184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:03.723 [2024-11-20 11:32:11.344204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.723 [2024-11-20 11:32:11.344828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.723 [2024-11-20 11:32:11.344878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:03.723 [2024-11-20 11:32:11.345040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:03.723 [2024-11-20 11:32:11.345069] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:03.723 [2024-11-20 11:32:11.345083] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:03.723 [2024-11-20 11:32:11.345127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.723 [2024-11-20 11:32:11.360768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:03.723 spare 00:20:03.723 11:32:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.723 11:32:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:03.723 [2024-11-20 11:32:11.368316] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:04.658 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.658 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.659 "name": "raid_bdev1", 00:20:04.659 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:04.659 "strip_size_kb": 64, 00:20:04.659 "state": 
"online", 00:20:04.659 "raid_level": "raid5f", 00:20:04.659 "superblock": true, 00:20:04.659 "num_base_bdevs": 3, 00:20:04.659 "num_base_bdevs_discovered": 3, 00:20:04.659 "num_base_bdevs_operational": 3, 00:20:04.659 "process": { 00:20:04.659 "type": "rebuild", 00:20:04.659 "target": "spare", 00:20:04.659 "progress": { 00:20:04.659 "blocks": 18432, 00:20:04.659 "percent": 14 00:20:04.659 } 00:20:04.659 }, 00:20:04.659 "base_bdevs_list": [ 00:20:04.659 { 00:20:04.659 "name": "spare", 00:20:04.659 "uuid": "534d523d-f6bb-503f-8c91-e9ed281c37e9", 00:20:04.659 "is_configured": true, 00:20:04.659 "data_offset": 2048, 00:20:04.659 "data_size": 63488 00:20:04.659 }, 00:20:04.659 { 00:20:04.659 "name": "BaseBdev2", 00:20:04.659 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:04.659 "is_configured": true, 00:20:04.659 "data_offset": 2048, 00:20:04.659 "data_size": 63488 00:20:04.659 }, 00:20:04.659 { 00:20:04.659 "name": "BaseBdev3", 00:20:04.659 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:04.659 "is_configured": true, 00:20:04.659 "data_offset": 2048, 00:20:04.659 "data_size": 63488 00:20:04.659 } 00:20:04.659 ] 00:20:04.659 }' 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.659 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.918 [2024-11-20 11:32:12.530241] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.918 [2024-11-20 11:32:12.583883] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.918 [2024-11-20 11:32:12.583988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.918 [2024-11-20 11:32:12.584018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.918 [2024-11-20 11:32:12.584030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.918 "name": "raid_bdev1", 00:20:04.918 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:04.918 "strip_size_kb": 64, 00:20:04.918 "state": "online", 00:20:04.918 "raid_level": "raid5f", 00:20:04.918 "superblock": true, 00:20:04.918 "num_base_bdevs": 3, 00:20:04.918 "num_base_bdevs_discovered": 2, 00:20:04.918 "num_base_bdevs_operational": 2, 00:20:04.918 "base_bdevs_list": [ 00:20:04.918 { 00:20:04.918 "name": null, 00:20:04.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.918 "is_configured": false, 00:20:04.918 "data_offset": 0, 00:20:04.918 "data_size": 63488 00:20:04.918 }, 00:20:04.918 { 00:20:04.918 "name": "BaseBdev2", 00:20:04.918 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:04.918 "is_configured": true, 00:20:04.918 "data_offset": 2048, 00:20:04.918 "data_size": 63488 00:20:04.918 }, 00:20:04.918 { 00:20:04.918 "name": "BaseBdev3", 00:20:04.918 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:04.918 "is_configured": true, 00:20:04.918 "data_offset": 2048, 00:20:04.918 "data_size": 63488 00:20:04.918 } 00:20:04.918 ] 00:20:04.918 }' 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.918 11:32:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.487 "name": "raid_bdev1", 00:20:05.487 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:05.487 "strip_size_kb": 64, 00:20:05.487 "state": "online", 00:20:05.487 "raid_level": "raid5f", 00:20:05.487 "superblock": true, 00:20:05.487 "num_base_bdevs": 3, 00:20:05.487 "num_base_bdevs_discovered": 2, 00:20:05.487 "num_base_bdevs_operational": 2, 00:20:05.487 "base_bdevs_list": [ 00:20:05.487 { 00:20:05.487 "name": null, 00:20:05.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.487 "is_configured": false, 00:20:05.487 "data_offset": 0, 00:20:05.487 "data_size": 63488 00:20:05.487 }, 00:20:05.487 { 00:20:05.487 "name": "BaseBdev2", 00:20:05.487 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:05.487 "is_configured": true, 00:20:05.487 "data_offset": 2048, 00:20:05.487 "data_size": 63488 00:20:05.487 }, 00:20:05.487 { 00:20:05.487 "name": "BaseBdev3", 00:20:05.487 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:05.487 "is_configured": true, 
00:20:05.487 "data_offset": 2048, 00:20:05.487 "data_size": 63488 00:20:05.487 } 00:20:05.487 ] 00:20:05.487 }' 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.487 [2024-11-20 11:32:13.290067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:05.487 [2024-11-20 11:32:13.290171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.487 [2024-11-20 11:32:13.290209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:05.487 [2024-11-20 11:32:13.290225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.487 [2024-11-20 11:32:13.290832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.487 [2024-11-20 
11:32:13.290874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:05.487 [2024-11-20 11:32:13.290991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:05.487 [2024-11-20 11:32:13.291018] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:05.487 [2024-11-20 11:32:13.291044] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:05.487 [2024-11-20 11:32:13.291057] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:05.487 BaseBdev1 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.487 11:32:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.866 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.867 11:32:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.867 "name": "raid_bdev1", 00:20:06.867 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:06.867 "strip_size_kb": 64, 00:20:06.867 "state": "online", 00:20:06.867 "raid_level": "raid5f", 00:20:06.867 "superblock": true, 00:20:06.867 "num_base_bdevs": 3, 00:20:06.867 "num_base_bdevs_discovered": 2, 00:20:06.867 "num_base_bdevs_operational": 2, 00:20:06.867 "base_bdevs_list": [ 00:20:06.867 { 00:20:06.867 "name": null, 00:20:06.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.867 "is_configured": false, 00:20:06.867 "data_offset": 0, 00:20:06.867 "data_size": 63488 00:20:06.867 }, 00:20:06.867 { 00:20:06.867 "name": "BaseBdev2", 00:20:06.867 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:06.867 "is_configured": true, 00:20:06.867 "data_offset": 2048, 00:20:06.867 "data_size": 63488 00:20:06.867 }, 00:20:06.867 { 00:20:06.867 "name": "BaseBdev3", 00:20:06.867 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:06.867 "is_configured": true, 00:20:06.867 "data_offset": 2048, 00:20:06.867 "data_size": 63488 00:20:06.867 } 00:20:06.867 ] 00:20:06.867 }' 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.867 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.126 "name": "raid_bdev1", 00:20:07.126 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:07.126 "strip_size_kb": 64, 00:20:07.126 "state": "online", 00:20:07.126 "raid_level": "raid5f", 00:20:07.126 "superblock": true, 00:20:07.126 "num_base_bdevs": 3, 00:20:07.126 "num_base_bdevs_discovered": 2, 00:20:07.126 "num_base_bdevs_operational": 2, 00:20:07.126 "base_bdevs_list": [ 00:20:07.126 { 00:20:07.126 "name": null, 00:20:07.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.126 "is_configured": false, 00:20:07.126 "data_offset": 0, 00:20:07.126 "data_size": 63488 00:20:07.126 }, 00:20:07.126 { 00:20:07.126 "name": "BaseBdev2", 00:20:07.126 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 
00:20:07.126 "is_configured": true, 00:20:07.126 "data_offset": 2048, 00:20:07.126 "data_size": 63488 00:20:07.126 }, 00:20:07.126 { 00:20:07.126 "name": "BaseBdev3", 00:20:07.126 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:07.126 "is_configured": true, 00:20:07.126 "data_offset": 2048, 00:20:07.126 "data_size": 63488 00:20:07.126 } 00:20:07.126 ] 00:20:07.126 }' 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:07.126 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:07.387 11:32:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.387 11:32:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.387 [2024-11-20 11:32:14.998708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.387 [2024-11-20 11:32:14.998938] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:07.387 [2024-11-20 11:32:14.998971] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:07.387 request: 00:20:07.387 { 00:20:07.387 "base_bdev": "BaseBdev1", 00:20:07.387 "raid_bdev": "raid_bdev1", 00:20:07.387 "method": "bdev_raid_add_base_bdev", 00:20:07.387 "req_id": 1 00:20:07.387 } 00:20:07.387 Got JSON-RPC error response 00:20:07.387 response: 00:20:07.387 { 00:20:07.387 "code": -22, 00:20:07.387 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:07.387 } 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.387 11:32:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.324 "name": "raid_bdev1", 00:20:08.324 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:08.324 "strip_size_kb": 64, 00:20:08.324 "state": "online", 00:20:08.324 "raid_level": "raid5f", 00:20:08.324 "superblock": true, 00:20:08.324 "num_base_bdevs": 3, 00:20:08.324 "num_base_bdevs_discovered": 2, 00:20:08.324 "num_base_bdevs_operational": 2, 00:20:08.324 "base_bdevs_list": [ 00:20:08.324 { 00:20:08.324 "name": null, 00:20:08.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.324 "is_configured": false, 00:20:08.324 "data_offset": 0, 00:20:08.324 "data_size": 63488 00:20:08.324 }, 00:20:08.324 { 00:20:08.324 
"name": "BaseBdev2", 00:20:08.324 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:08.324 "is_configured": true, 00:20:08.324 "data_offset": 2048, 00:20:08.324 "data_size": 63488 00:20:08.324 }, 00:20:08.324 { 00:20:08.324 "name": "BaseBdev3", 00:20:08.324 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:08.324 "is_configured": true, 00:20:08.324 "data_offset": 2048, 00:20:08.324 "data_size": 63488 00:20:08.324 } 00:20:08.324 ] 00:20:08.324 }' 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.324 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.892 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.892 "name": "raid_bdev1", 00:20:08.892 "uuid": "6a175732-5d33-4feb-8ef1-9a87097e988c", 00:20:08.892 
"strip_size_kb": 64, 00:20:08.892 "state": "online", 00:20:08.892 "raid_level": "raid5f", 00:20:08.892 "superblock": true, 00:20:08.892 "num_base_bdevs": 3, 00:20:08.892 "num_base_bdevs_discovered": 2, 00:20:08.892 "num_base_bdevs_operational": 2, 00:20:08.892 "base_bdevs_list": [ 00:20:08.892 { 00:20:08.892 "name": null, 00:20:08.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.893 "is_configured": false, 00:20:08.893 "data_offset": 0, 00:20:08.893 "data_size": 63488 00:20:08.893 }, 00:20:08.893 { 00:20:08.893 "name": "BaseBdev2", 00:20:08.893 "uuid": "3fc29a71-68ff-5693-94c3-92075275fae3", 00:20:08.893 "is_configured": true, 00:20:08.893 "data_offset": 2048, 00:20:08.893 "data_size": 63488 00:20:08.893 }, 00:20:08.893 { 00:20:08.893 "name": "BaseBdev3", 00:20:08.893 "uuid": "8e4f1a78-c0fc-5015-a1d5-9a43c1575dcd", 00:20:08.893 "is_configured": true, 00:20:08.893 "data_offset": 2048, 00:20:08.893 "data_size": 63488 00:20:08.893 } 00:20:08.893 ] 00:20:08.893 }' 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82302 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82302 ']' 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82302 00:20:08.893 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.151 11:32:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82302 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.151 killing process with pid 82302 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82302' 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82302 00:20:09.151 Received shutdown signal, test time was about 60.000000 seconds 00:20:09.151 00:20:09.151 Latency(us) 00:20:09.151 [2024-11-20T11:32:16.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.151 [2024-11-20T11:32:16.997Z] =================================================================================================================== 00:20:09.151 [2024-11-20T11:32:16.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.151 [2024-11-20 11:32:16.767202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.151 11:32:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82302 00:20:09.151 [2024-11-20 11:32:16.767377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.151 [2024-11-20 11:32:16.767465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.151 [2024-11-20 11:32:16.767486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:09.411 [2024-11-20 11:32:17.138004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:10.789 11:32:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:10.789 00:20:10.789 real 0m25.277s 00:20:10.789 user 0m33.754s 
00:20:10.789 sys 0m2.687s 00:20:10.789 11:32:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.789 11:32:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.789 ************************************ 00:20:10.789 END TEST raid5f_rebuild_test_sb 00:20:10.789 ************************************ 00:20:10.789 11:32:18 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:10.789 11:32:18 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:10.789 11:32:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:10.789 11:32:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.789 11:32:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:10.789 ************************************ 00:20:10.789 START TEST raid5f_state_function_test 00:20:10.789 ************************************ 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83063 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:10.789 Process raid pid: 83063 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83063' 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83063 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83063 ']' 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.789 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.790 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.790 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.790 11:32:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.790 [2024-11-20 11:32:18.361429] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:20:10.790 [2024-11-20 11:32:18.361606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.790 [2024-11-20 11:32:18.540471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.048 [2024-11-20 11:32:18.676719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.048 [2024-11-20 11:32:18.890725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.048 [2024-11-20 11:32:18.890789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.615 [2024-11-20 11:32:19.387186] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.615 [2024-11-20 11:32:19.387268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.615 [2024-11-20 11:32:19.387286] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.615 [2024-11-20 11:32:19.387303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.615 [2024-11-20 11:32:19.387314] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:11.615 [2024-11-20 11:32:19.387329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.615 [2024-11-20 11:32:19.387339] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:11.615 [2024-11-20 11:32:19.387354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.615 "name": "Existed_Raid", 00:20:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.615 "strip_size_kb": 64, 00:20:11.615 "state": "configuring", 00:20:11.615 "raid_level": "raid5f", 00:20:11.615 "superblock": false, 00:20:11.615 "num_base_bdevs": 4, 00:20:11.615 "num_base_bdevs_discovered": 0, 00:20:11.615 "num_base_bdevs_operational": 4, 00:20:11.615 "base_bdevs_list": [ 00:20:11.615 { 00:20:11.615 "name": "BaseBdev1", 00:20:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.615 "is_configured": false, 00:20:11.615 "data_offset": 0, 00:20:11.615 "data_size": 0 00:20:11.615 }, 00:20:11.615 { 00:20:11.615 "name": "BaseBdev2", 00:20:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.615 "is_configured": false, 00:20:11.615 "data_offset": 0, 00:20:11.615 "data_size": 0 00:20:11.615 }, 00:20:11.615 { 00:20:11.615 "name": "BaseBdev3", 00:20:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.615 "is_configured": false, 00:20:11.615 "data_offset": 0, 00:20:11.615 "data_size": 0 00:20:11.615 }, 00:20:11.615 { 00:20:11.615 "name": "BaseBdev4", 00:20:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.615 "is_configured": false, 00:20:11.615 "data_offset": 0, 00:20:11.615 "data_size": 0 00:20:11.615 } 00:20:11.615 ] 00:20:11.615 }' 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.615 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [2024-11-20 11:32:19.951328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:12.184 [2024-11-20 11:32:19.951379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [2024-11-20 11:32:19.959302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:12.184 [2024-11-20 11:32:19.959371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:12.184 [2024-11-20 11:32:19.959386] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:12.184 [2024-11-20 11:32:19.959401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:12.184 [2024-11-20 11:32:19.959411] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:12.184 [2024-11-20 11:32:19.959425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:12.184 [2024-11-20 11:32:19.959434] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:12.184 [2024-11-20 11:32:19.959449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 11:32:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [2024-11-20 11:32:20.003064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.184 BaseBdev1 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.184 
11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.184 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.184 [ 00:20:12.184 { 00:20:12.184 "name": "BaseBdev1", 00:20:12.184 "aliases": [ 00:20:12.184 "f38baf3c-4618-49e4-be56-f5571cac48ba" 00:20:12.184 ], 00:20:12.184 "product_name": "Malloc disk", 00:20:12.184 "block_size": 512, 00:20:12.184 "num_blocks": 65536, 00:20:12.184 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:12.184 "assigned_rate_limits": { 00:20:12.184 "rw_ios_per_sec": 0, 00:20:12.184 "rw_mbytes_per_sec": 0, 00:20:12.184 "r_mbytes_per_sec": 0, 00:20:12.184 "w_mbytes_per_sec": 0 00:20:12.184 }, 00:20:12.184 "claimed": true, 00:20:12.184 "claim_type": "exclusive_write", 00:20:12.184 "zoned": false, 00:20:12.184 "supported_io_types": { 00:20:12.184 "read": true, 00:20:12.443 "write": true, 00:20:12.443 "unmap": true, 00:20:12.443 "flush": true, 00:20:12.443 "reset": true, 00:20:12.443 "nvme_admin": false, 00:20:12.443 "nvme_io": false, 00:20:12.443 "nvme_io_md": false, 00:20:12.443 "write_zeroes": true, 00:20:12.443 "zcopy": true, 00:20:12.443 "get_zone_info": false, 00:20:12.443 "zone_management": false, 00:20:12.443 "zone_append": false, 00:20:12.443 "compare": false, 00:20:12.443 "compare_and_write": false, 00:20:12.443 "abort": true, 00:20:12.443 "seek_hole": false, 00:20:12.443 "seek_data": false, 00:20:12.443 "copy": true, 00:20:12.443 "nvme_iov_md": false 00:20:12.443 }, 00:20:12.443 "memory_domains": [ 00:20:12.443 { 00:20:12.443 "dma_device_id": "system", 00:20:12.443 "dma_device_type": 1 00:20:12.443 }, 00:20:12.443 { 00:20:12.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.443 "dma_device_type": 2 00:20:12.443 } 00:20:12.443 ], 00:20:12.443 "driver_specific": {} 00:20:12.443 } 
00:20:12.443 ] 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.443 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.444 "name": "Existed_Raid", 00:20:12.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.444 "strip_size_kb": 64, 00:20:12.444 "state": "configuring", 00:20:12.444 "raid_level": "raid5f", 00:20:12.444 "superblock": false, 00:20:12.444 "num_base_bdevs": 4, 00:20:12.444 "num_base_bdevs_discovered": 1, 00:20:12.444 "num_base_bdevs_operational": 4, 00:20:12.444 "base_bdevs_list": [ 00:20:12.444 { 00:20:12.444 "name": "BaseBdev1", 00:20:12.444 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:12.444 "is_configured": true, 00:20:12.444 "data_offset": 0, 00:20:12.444 "data_size": 65536 00:20:12.444 }, 00:20:12.444 { 00:20:12.444 "name": "BaseBdev2", 00:20:12.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.444 "is_configured": false, 00:20:12.444 "data_offset": 0, 00:20:12.444 "data_size": 0 00:20:12.444 }, 00:20:12.444 { 00:20:12.444 "name": "BaseBdev3", 00:20:12.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.444 "is_configured": false, 00:20:12.444 "data_offset": 0, 00:20:12.444 "data_size": 0 00:20:12.444 }, 00:20:12.444 { 00:20:12.444 "name": "BaseBdev4", 00:20:12.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.444 "is_configured": false, 00:20:12.444 "data_offset": 0, 00:20:12.444 "data_size": 0 00:20:12.444 } 00:20:12.444 ] 00:20:12.444 }' 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.444 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.010 
[2024-11-20 11:32:20.571329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:13.010 [2024-11-20 11:32:20.571423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.010 [2024-11-20 11:32:20.579441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.010 [2024-11-20 11:32:20.582453] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.010 [2024-11-20 11:32:20.582706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.010 [2024-11-20 11:32:20.582835] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.010 [2024-11-20 11:32:20.582872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.010 [2024-11-20 11:32:20.582886] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:13.010 [2024-11-20 11:32:20.582902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.010 "name": "Existed_Raid", 00:20:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:13.010 "strip_size_kb": 64, 00:20:13.010 "state": "configuring", 00:20:13.010 "raid_level": "raid5f", 00:20:13.010 "superblock": false, 00:20:13.010 "num_base_bdevs": 4, 00:20:13.010 "num_base_bdevs_discovered": 1, 00:20:13.010 "num_base_bdevs_operational": 4, 00:20:13.010 "base_bdevs_list": [ 00:20:13.010 { 00:20:13.010 "name": "BaseBdev1", 00:20:13.010 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:13.010 "is_configured": true, 00:20:13.010 "data_offset": 0, 00:20:13.010 "data_size": 65536 00:20:13.010 }, 00:20:13.010 { 00:20:13.010 "name": "BaseBdev2", 00:20:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.010 "is_configured": false, 00:20:13.010 "data_offset": 0, 00:20:13.010 "data_size": 0 00:20:13.010 }, 00:20:13.010 { 00:20:13.010 "name": "BaseBdev3", 00:20:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.010 "is_configured": false, 00:20:13.010 "data_offset": 0, 00:20:13.010 "data_size": 0 00:20:13.010 }, 00:20:13.010 { 00:20:13.010 "name": "BaseBdev4", 00:20:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.010 "is_configured": false, 00:20:13.010 "data_offset": 0, 00:20:13.010 "data_size": 0 00:20:13.010 } 00:20:13.010 ] 00:20:13.010 }' 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.010 11:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.269 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.269 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.269 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [2024-11-20 11:32:21.139959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:13.528 BaseBdev2 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.528 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.528 [ 00:20:13.528 { 00:20:13.528 "name": "BaseBdev2", 00:20:13.528 "aliases": [ 00:20:13.528 "e9339ffb-9759-4a7b-9372-da0deadf31b8" 00:20:13.528 ], 00:20:13.528 "product_name": "Malloc disk", 00:20:13.528 "block_size": 512, 00:20:13.528 "num_blocks": 65536, 00:20:13.528 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:13.528 "assigned_rate_limits": { 00:20:13.528 "rw_ios_per_sec": 0, 00:20:13.528 "rw_mbytes_per_sec": 0, 00:20:13.528 
"r_mbytes_per_sec": 0, 00:20:13.528 "w_mbytes_per_sec": 0 00:20:13.528 }, 00:20:13.528 "claimed": true, 00:20:13.528 "claim_type": "exclusive_write", 00:20:13.528 "zoned": false, 00:20:13.528 "supported_io_types": { 00:20:13.528 "read": true, 00:20:13.528 "write": true, 00:20:13.528 "unmap": true, 00:20:13.529 "flush": true, 00:20:13.529 "reset": true, 00:20:13.529 "nvme_admin": false, 00:20:13.529 "nvme_io": false, 00:20:13.529 "nvme_io_md": false, 00:20:13.529 "write_zeroes": true, 00:20:13.529 "zcopy": true, 00:20:13.529 "get_zone_info": false, 00:20:13.529 "zone_management": false, 00:20:13.529 "zone_append": false, 00:20:13.529 "compare": false, 00:20:13.529 "compare_and_write": false, 00:20:13.529 "abort": true, 00:20:13.529 "seek_hole": false, 00:20:13.529 "seek_data": false, 00:20:13.529 "copy": true, 00:20:13.529 "nvme_iov_md": false 00:20:13.529 }, 00:20:13.529 "memory_domains": [ 00:20:13.529 { 00:20:13.529 "dma_device_id": "system", 00:20:13.529 "dma_device_type": 1 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.529 "dma_device_type": 2 00:20:13.529 } 00:20:13.529 ], 00:20:13.529 "driver_specific": {} 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.529 "name": "Existed_Raid", 00:20:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.529 "strip_size_kb": 64, 00:20:13.529 "state": "configuring", 00:20:13.529 "raid_level": "raid5f", 00:20:13.529 "superblock": false, 00:20:13.529 "num_base_bdevs": 4, 00:20:13.529 "num_base_bdevs_discovered": 2, 00:20:13.529 "num_base_bdevs_operational": 4, 00:20:13.529 "base_bdevs_list": [ 00:20:13.529 { 00:20:13.529 "name": "BaseBdev1", 00:20:13.529 "uuid": 
"f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev2", 00:20:13.529 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:13.529 "is_configured": true, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 65536 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev3", 00:20:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.529 "is_configured": false, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 0 00:20:13.529 }, 00:20:13.529 { 00:20:13.529 "name": "BaseBdev4", 00:20:13.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.529 "is_configured": false, 00:20:13.529 "data_offset": 0, 00:20:13.529 "data_size": 0 00:20:13.529 } 00:20:13.529 ] 00:20:13.529 }' 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.529 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.096 [2024-11-20 11:32:21.737763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.096 BaseBdev3 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:14.096 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.097 [ 00:20:14.097 { 00:20:14.097 "name": "BaseBdev3", 00:20:14.097 "aliases": [ 00:20:14.097 "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7" 00:20:14.097 ], 00:20:14.097 "product_name": "Malloc disk", 00:20:14.097 "block_size": 512, 00:20:14.097 "num_blocks": 65536, 00:20:14.097 "uuid": "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7", 00:20:14.097 "assigned_rate_limits": { 00:20:14.097 "rw_ios_per_sec": 0, 00:20:14.097 "rw_mbytes_per_sec": 0, 00:20:14.097 "r_mbytes_per_sec": 0, 00:20:14.097 "w_mbytes_per_sec": 0 00:20:14.097 }, 00:20:14.097 "claimed": true, 00:20:14.097 "claim_type": "exclusive_write", 00:20:14.097 "zoned": false, 00:20:14.097 "supported_io_types": { 00:20:14.097 "read": true, 00:20:14.097 "write": true, 00:20:14.097 "unmap": true, 00:20:14.097 "flush": true, 00:20:14.097 "reset": true, 00:20:14.097 "nvme_admin": false, 
00:20:14.097 "nvme_io": false, 00:20:14.097 "nvme_io_md": false, 00:20:14.097 "write_zeroes": true, 00:20:14.097 "zcopy": true, 00:20:14.097 "get_zone_info": false, 00:20:14.097 "zone_management": false, 00:20:14.097 "zone_append": false, 00:20:14.097 "compare": false, 00:20:14.097 "compare_and_write": false, 00:20:14.097 "abort": true, 00:20:14.097 "seek_hole": false, 00:20:14.097 "seek_data": false, 00:20:14.097 "copy": true, 00:20:14.097 "nvme_iov_md": false 00:20:14.097 }, 00:20:14.097 "memory_domains": [ 00:20:14.097 { 00:20:14.097 "dma_device_id": "system", 00:20:14.097 "dma_device_type": 1 00:20:14.097 }, 00:20:14.097 { 00:20:14.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.097 "dma_device_type": 2 00:20:14.097 } 00:20:14.097 ], 00:20:14.097 "driver_specific": {} 00:20:14.097 } 00:20:14.097 ] 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.097 "name": "Existed_Raid", 00:20:14.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.097 "strip_size_kb": 64, 00:20:14.097 "state": "configuring", 00:20:14.097 "raid_level": "raid5f", 00:20:14.097 "superblock": false, 00:20:14.097 "num_base_bdevs": 4, 00:20:14.097 "num_base_bdevs_discovered": 3, 00:20:14.097 "num_base_bdevs_operational": 4, 00:20:14.097 "base_bdevs_list": [ 00:20:14.097 { 00:20:14.097 "name": "BaseBdev1", 00:20:14.097 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:14.097 "is_configured": true, 00:20:14.097 "data_offset": 0, 00:20:14.097 "data_size": 65536 00:20:14.097 }, 00:20:14.097 { 00:20:14.097 "name": "BaseBdev2", 00:20:14.097 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:14.097 "is_configured": true, 00:20:14.097 "data_offset": 0, 00:20:14.097 "data_size": 65536 00:20:14.097 }, 00:20:14.097 { 
00:20:14.097 "name": "BaseBdev3", 00:20:14.097 "uuid": "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7", 00:20:14.097 "is_configured": true, 00:20:14.097 "data_offset": 0, 00:20:14.097 "data_size": 65536 00:20:14.097 }, 00:20:14.097 { 00:20:14.097 "name": "BaseBdev4", 00:20:14.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.097 "is_configured": false, 00:20:14.097 "data_offset": 0, 00:20:14.097 "data_size": 0 00:20:14.097 } 00:20:14.097 ] 00:20:14.097 }' 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.097 11:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 [2024-11-20 11:32:22.348220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:14.665 [2024-11-20 11:32:22.348303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:14.665 [2024-11-20 11:32:22.348319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:14.665 [2024-11-20 11:32:22.348637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:14.665 [2024-11-20 11:32:22.355104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:14.665 [2024-11-20 11:32:22.355134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:14.665 [2024-11-20 11:32:22.355454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.665 BaseBdev4 00:20:14.665 11:32:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.665 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.665 [ 00:20:14.665 { 00:20:14.665 "name": "BaseBdev4", 00:20:14.665 "aliases": [ 00:20:14.665 "a971af16-3900-4c7e-9eeb-b2811cfc9d2e" 00:20:14.665 ], 00:20:14.665 "product_name": "Malloc disk", 00:20:14.665 "block_size": 512, 00:20:14.665 "num_blocks": 65536, 00:20:14.665 "uuid": "a971af16-3900-4c7e-9eeb-b2811cfc9d2e", 00:20:14.665 "assigned_rate_limits": { 00:20:14.665 "rw_ios_per_sec": 0, 00:20:14.665 
"rw_mbytes_per_sec": 0, 00:20:14.665 "r_mbytes_per_sec": 0, 00:20:14.665 "w_mbytes_per_sec": 0 00:20:14.665 }, 00:20:14.665 "claimed": true, 00:20:14.665 "claim_type": "exclusive_write", 00:20:14.665 "zoned": false, 00:20:14.665 "supported_io_types": { 00:20:14.665 "read": true, 00:20:14.665 "write": true, 00:20:14.665 "unmap": true, 00:20:14.665 "flush": true, 00:20:14.665 "reset": true, 00:20:14.665 "nvme_admin": false, 00:20:14.665 "nvme_io": false, 00:20:14.665 "nvme_io_md": false, 00:20:14.665 "write_zeroes": true, 00:20:14.665 "zcopy": true, 00:20:14.665 "get_zone_info": false, 00:20:14.665 "zone_management": false, 00:20:14.665 "zone_append": false, 00:20:14.665 "compare": false, 00:20:14.665 "compare_and_write": false, 00:20:14.665 "abort": true, 00:20:14.665 "seek_hole": false, 00:20:14.665 "seek_data": false, 00:20:14.665 "copy": true, 00:20:14.665 "nvme_iov_md": false 00:20:14.665 }, 00:20:14.665 "memory_domains": [ 00:20:14.665 { 00:20:14.665 "dma_device_id": "system", 00:20:14.665 "dma_device_type": 1 00:20:14.665 }, 00:20:14.665 { 00:20:14.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.666 "dma_device_type": 2 00:20:14.666 } 00:20:14.666 ], 00:20:14.666 "driver_specific": {} 00:20:14.666 } 00:20:14.666 ] 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.666 11:32:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.666 "name": "Existed_Raid", 00:20:14.666 "uuid": "f5901db2-03e9-4547-89ef-6e72c97ab7c0", 00:20:14.666 "strip_size_kb": 64, 00:20:14.666 "state": "online", 00:20:14.666 "raid_level": "raid5f", 00:20:14.666 "superblock": false, 00:20:14.666 "num_base_bdevs": 4, 00:20:14.666 "num_base_bdevs_discovered": 4, 00:20:14.666 "num_base_bdevs_operational": 4, 00:20:14.666 "base_bdevs_list": [ 00:20:14.666 { 00:20:14.666 "name": 
"BaseBdev1", 00:20:14.666 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:14.666 "is_configured": true, 00:20:14.666 "data_offset": 0, 00:20:14.666 "data_size": 65536 00:20:14.666 }, 00:20:14.666 { 00:20:14.666 "name": "BaseBdev2", 00:20:14.666 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:14.666 "is_configured": true, 00:20:14.666 "data_offset": 0, 00:20:14.666 "data_size": 65536 00:20:14.666 }, 00:20:14.666 { 00:20:14.666 "name": "BaseBdev3", 00:20:14.666 "uuid": "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7", 00:20:14.666 "is_configured": true, 00:20:14.666 "data_offset": 0, 00:20:14.666 "data_size": 65536 00:20:14.666 }, 00:20:14.666 { 00:20:14.666 "name": "BaseBdev4", 00:20:14.666 "uuid": "a971af16-3900-4c7e-9eeb-b2811cfc9d2e", 00:20:14.666 "is_configured": true, 00:20:14.666 "data_offset": 0, 00:20:14.666 "data_size": 65536 00:20:14.666 } 00:20:14.666 ] 00:20:14.666 }' 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.666 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.234 [2024-11-20 11:32:22.919835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.234 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.234 "name": "Existed_Raid", 00:20:15.234 "aliases": [ 00:20:15.234 "f5901db2-03e9-4547-89ef-6e72c97ab7c0" 00:20:15.234 ], 00:20:15.234 "product_name": "Raid Volume", 00:20:15.234 "block_size": 512, 00:20:15.234 "num_blocks": 196608, 00:20:15.234 "uuid": "f5901db2-03e9-4547-89ef-6e72c97ab7c0", 00:20:15.234 "assigned_rate_limits": { 00:20:15.234 "rw_ios_per_sec": 0, 00:20:15.234 "rw_mbytes_per_sec": 0, 00:20:15.234 "r_mbytes_per_sec": 0, 00:20:15.234 "w_mbytes_per_sec": 0 00:20:15.234 }, 00:20:15.234 "claimed": false, 00:20:15.234 "zoned": false, 00:20:15.234 "supported_io_types": { 00:20:15.234 "read": true, 00:20:15.234 "write": true, 00:20:15.234 "unmap": false, 00:20:15.234 "flush": false, 00:20:15.234 "reset": true, 00:20:15.234 "nvme_admin": false, 00:20:15.234 "nvme_io": false, 00:20:15.234 "nvme_io_md": false, 00:20:15.234 "write_zeroes": true, 00:20:15.234 "zcopy": false, 00:20:15.234 "get_zone_info": false, 00:20:15.234 "zone_management": false, 00:20:15.234 "zone_append": false, 00:20:15.234 "compare": false, 00:20:15.234 "compare_and_write": false, 00:20:15.234 "abort": false, 00:20:15.234 "seek_hole": false, 00:20:15.234 "seek_data": false, 00:20:15.234 "copy": false, 00:20:15.234 "nvme_iov_md": false 00:20:15.234 }, 00:20:15.234 "driver_specific": { 00:20:15.234 "raid": { 00:20:15.234 "uuid": "f5901db2-03e9-4547-89ef-6e72c97ab7c0", 00:20:15.234 "strip_size_kb": 64, 
00:20:15.234 "state": "online", 00:20:15.234 "raid_level": "raid5f", 00:20:15.234 "superblock": false, 00:20:15.234 "num_base_bdevs": 4, 00:20:15.234 "num_base_bdevs_discovered": 4, 00:20:15.234 "num_base_bdevs_operational": 4, 00:20:15.234 "base_bdevs_list": [ 00:20:15.234 { 00:20:15.234 "name": "BaseBdev1", 00:20:15.234 "uuid": "f38baf3c-4618-49e4-be56-f5571cac48ba", 00:20:15.234 "is_configured": true, 00:20:15.234 "data_offset": 0, 00:20:15.234 "data_size": 65536 00:20:15.234 }, 00:20:15.234 { 00:20:15.234 "name": "BaseBdev2", 00:20:15.234 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:15.234 "is_configured": true, 00:20:15.234 "data_offset": 0, 00:20:15.234 "data_size": 65536 00:20:15.234 }, 00:20:15.234 { 00:20:15.234 "name": "BaseBdev3", 00:20:15.234 "uuid": "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7", 00:20:15.234 "is_configured": true, 00:20:15.234 "data_offset": 0, 00:20:15.235 "data_size": 65536 00:20:15.235 }, 00:20:15.235 { 00:20:15.235 "name": "BaseBdev4", 00:20:15.235 "uuid": "a971af16-3900-4c7e-9eeb-b2811cfc9d2e", 00:20:15.235 "is_configured": true, 00:20:15.235 "data_offset": 0, 00:20:15.235 "data_size": 65536 00:20:15.235 } 00:20:15.235 ] 00:20:15.235 } 00:20:15.235 } 00:20:15.235 }' 00:20:15.235 11:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:15.235 BaseBdev2 00:20:15.235 BaseBdev3 00:20:15.235 BaseBdev4' 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.235 11:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.235 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.494 11:32:23 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:15.494 [2024-11-20 11:32:23.311605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.753 11:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.753 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.753 "name": "Existed_Raid", 00:20:15.753 "uuid": "f5901db2-03e9-4547-89ef-6e72c97ab7c0", 00:20:15.753 "strip_size_kb": 64, 00:20:15.753 "state": "online", 00:20:15.753 "raid_level": "raid5f", 00:20:15.753 "superblock": false, 00:20:15.753 "num_base_bdevs": 4, 00:20:15.753 "num_base_bdevs_discovered": 3, 00:20:15.753 "num_base_bdevs_operational": 3, 00:20:15.753 "base_bdevs_list": [ 00:20:15.753 { 00:20:15.753 "name": null, 00:20:15.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.753 "is_configured": false, 00:20:15.753 "data_offset": 0, 00:20:15.753 "data_size": 65536 00:20:15.753 }, 00:20:15.753 { 00:20:15.753 "name": "BaseBdev2", 00:20:15.753 "uuid": "e9339ffb-9759-4a7b-9372-da0deadf31b8", 00:20:15.753 "is_configured": true, 00:20:15.753 "data_offset": 0, 00:20:15.753 "data_size": 65536 00:20:15.753 }, 00:20:15.753 { 00:20:15.753 "name": "BaseBdev3", 00:20:15.753 "uuid": "e5a7e3aa-fa67-4b57-afe2-b5bdd985bbd7", 00:20:15.753 "is_configured": true, 00:20:15.753 "data_offset": 0, 00:20:15.753 "data_size": 65536 00:20:15.753 }, 00:20:15.753 { 00:20:15.753 "name": "BaseBdev4", 00:20:15.753 "uuid": "a971af16-3900-4c7e-9eeb-b2811cfc9d2e", 00:20:15.753 "is_configured": true, 00:20:15.753 "data_offset": 0, 00:20:15.753 "data_size": 65536 00:20:15.753 } 00:20:15.753 ] 00:20:15.753 }' 00:20:15.753 
11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.754 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.321 11:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 [2024-11-20 11:32:23.971345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:16.321 [2024-11-20 11:32:23.971465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.321 [2024-11-20 11:32:24.048903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.321 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.321 [2024-11-20 11:32:24.113000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.580 [2024-11-20 11:32:24.260868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:16.580 [2024-11-20 11:32:24.260936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.580 11:32:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.580 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.839 BaseBdev2 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.839 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.839 [ 00:20:16.839 { 00:20:16.839 "name": "BaseBdev2", 00:20:16.839 "aliases": [ 00:20:16.839 "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0" 00:20:16.839 ], 00:20:16.839 "product_name": "Malloc disk", 00:20:16.840 "block_size": 512, 00:20:16.840 "num_blocks": 65536, 00:20:16.840 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:16.840 "assigned_rate_limits": { 00:20:16.840 "rw_ios_per_sec": 0, 00:20:16.840 "rw_mbytes_per_sec": 0, 00:20:16.840 "r_mbytes_per_sec": 0, 00:20:16.840 "w_mbytes_per_sec": 0 00:20:16.840 }, 00:20:16.840 "claimed": false, 00:20:16.840 "zoned": false, 00:20:16.840 "supported_io_types": { 00:20:16.840 "read": true, 00:20:16.840 "write": true, 00:20:16.840 "unmap": true, 00:20:16.840 "flush": true, 00:20:16.840 "reset": true, 00:20:16.840 "nvme_admin": false, 00:20:16.840 "nvme_io": false, 00:20:16.840 "nvme_io_md": false, 00:20:16.840 "write_zeroes": true, 00:20:16.840 "zcopy": true, 00:20:16.840 "get_zone_info": false, 00:20:16.840 "zone_management": false, 00:20:16.840 "zone_append": false, 00:20:16.840 "compare": false, 00:20:16.840 "compare_and_write": false, 00:20:16.840 "abort": true, 00:20:16.840 "seek_hole": false, 00:20:16.840 "seek_data": false, 00:20:16.840 "copy": true, 00:20:16.840 "nvme_iov_md": false 00:20:16.840 }, 00:20:16.840 "memory_domains": [ 00:20:16.840 { 00:20:16.840 "dma_device_id": "system", 00:20:16.840 "dma_device_type": 1 00:20:16.840 }, 
00:20:16.840 { 00:20:16.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.840 "dma_device_type": 2 00:20:16.840 } 00:20:16.840 ], 00:20:16.840 "driver_specific": {} 00:20:16.840 } 00:20:16.840 ] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 BaseBdev3 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 [ 00:20:16.840 { 00:20:16.840 "name": "BaseBdev3", 00:20:16.840 "aliases": [ 00:20:16.840 "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9" 00:20:16.840 ], 00:20:16.840 "product_name": "Malloc disk", 00:20:16.840 "block_size": 512, 00:20:16.840 "num_blocks": 65536, 00:20:16.840 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:16.840 "assigned_rate_limits": { 00:20:16.840 "rw_ios_per_sec": 0, 00:20:16.840 "rw_mbytes_per_sec": 0, 00:20:16.840 "r_mbytes_per_sec": 0, 00:20:16.840 "w_mbytes_per_sec": 0 00:20:16.840 }, 00:20:16.840 "claimed": false, 00:20:16.840 "zoned": false, 00:20:16.840 "supported_io_types": { 00:20:16.840 "read": true, 00:20:16.840 "write": true, 00:20:16.840 "unmap": true, 00:20:16.840 "flush": true, 00:20:16.840 "reset": true, 00:20:16.840 "nvme_admin": false, 00:20:16.840 "nvme_io": false, 00:20:16.840 "nvme_io_md": false, 00:20:16.840 "write_zeroes": true, 00:20:16.840 "zcopy": true, 00:20:16.840 "get_zone_info": false, 00:20:16.840 "zone_management": false, 00:20:16.840 "zone_append": false, 00:20:16.840 "compare": false, 00:20:16.840 "compare_and_write": false, 00:20:16.840 "abort": true, 00:20:16.840 "seek_hole": false, 00:20:16.840 "seek_data": false, 00:20:16.840 "copy": true, 00:20:16.840 "nvme_iov_md": false 00:20:16.840 }, 00:20:16.840 "memory_domains": [ 00:20:16.840 { 00:20:16.840 "dma_device_id": "system", 00:20:16.840 
"dma_device_type": 1 00:20:16.840 }, 00:20:16.840 { 00:20:16.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.840 "dma_device_type": 2 00:20:16.840 } 00:20:16.840 ], 00:20:16.840 "driver_specific": {} 00:20:16.840 } 00:20:16.840 ] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 BaseBdev4 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.840 11:32:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.840 [ 00:20:16.840 { 00:20:16.840 "name": "BaseBdev4", 00:20:16.840 "aliases": [ 00:20:16.840 "673b696d-6e73-4822-ad90-8825684a88ab" 00:20:16.840 ], 00:20:16.840 "product_name": "Malloc disk", 00:20:16.840 "block_size": 512, 00:20:16.840 "num_blocks": 65536, 00:20:16.840 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:16.840 "assigned_rate_limits": { 00:20:16.840 "rw_ios_per_sec": 0, 00:20:16.840 "rw_mbytes_per_sec": 0, 00:20:16.840 "r_mbytes_per_sec": 0, 00:20:16.840 "w_mbytes_per_sec": 0 00:20:16.840 }, 00:20:16.840 "claimed": false, 00:20:16.840 "zoned": false, 00:20:16.840 "supported_io_types": { 00:20:16.840 "read": true, 00:20:16.840 "write": true, 00:20:16.840 "unmap": true, 00:20:16.840 "flush": true, 00:20:16.840 "reset": true, 00:20:16.840 "nvme_admin": false, 00:20:16.840 "nvme_io": false, 00:20:16.840 "nvme_io_md": false, 00:20:16.840 "write_zeroes": true, 00:20:16.840 "zcopy": true, 00:20:16.840 "get_zone_info": false, 00:20:16.840 "zone_management": false, 00:20:16.840 "zone_append": false, 00:20:16.840 "compare": false, 00:20:16.840 "compare_and_write": false, 00:20:16.840 "abort": true, 00:20:16.840 "seek_hole": false, 00:20:16.840 "seek_data": false, 00:20:16.840 "copy": true, 00:20:16.840 "nvme_iov_md": false 00:20:16.840 }, 00:20:16.840 "memory_domains": [ 00:20:16.840 { 00:20:16.840 
"dma_device_id": "system", 00:20:16.840 "dma_device_type": 1 00:20:16.840 }, 00:20:16.840 { 00:20:16.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.840 "dma_device_type": 2 00:20:16.840 } 00:20:16.840 ], 00:20:16.840 "driver_specific": {} 00:20:16.840 } 00:20:16.840 ] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.840 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.841 [2024-11-20 11:32:24.636608] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:16.841 [2024-11-20 11:32:24.636856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:16.841 [2024-11-20 11:32:24.636917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.841 [2024-11-20 11:32:24.639583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.841 [2024-11-20 11:32:24.639688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.841 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.100 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.100 "name": "Existed_Raid", 00:20:17.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.100 "strip_size_kb": 64, 00:20:17.100 "state": "configuring", 00:20:17.100 "raid_level": "raid5f", 00:20:17.100 "superblock": false, 00:20:17.100 
"num_base_bdevs": 4, 00:20:17.100 "num_base_bdevs_discovered": 3, 00:20:17.100 "num_base_bdevs_operational": 4, 00:20:17.100 "base_bdevs_list": [ 00:20:17.100 { 00:20:17.100 "name": "BaseBdev1", 00:20:17.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.100 "is_configured": false, 00:20:17.100 "data_offset": 0, 00:20:17.100 "data_size": 0 00:20:17.100 }, 00:20:17.100 { 00:20:17.100 "name": "BaseBdev2", 00:20:17.100 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:17.100 "is_configured": true, 00:20:17.100 "data_offset": 0, 00:20:17.100 "data_size": 65536 00:20:17.100 }, 00:20:17.100 { 00:20:17.100 "name": "BaseBdev3", 00:20:17.100 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:17.100 "is_configured": true, 00:20:17.100 "data_offset": 0, 00:20:17.100 "data_size": 65536 00:20:17.100 }, 00:20:17.100 { 00:20:17.100 "name": "BaseBdev4", 00:20:17.100 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:17.100 "is_configured": true, 00:20:17.100 "data_offset": 0, 00:20:17.100 "data_size": 65536 00:20:17.100 } 00:20:17.100 ] 00:20:17.100 }' 00:20:17.100 11:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.100 11:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.372 [2024-11-20 11:32:25.172800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.372 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.631 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.631 "name": "Existed_Raid", 00:20:17.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.631 "strip_size_kb": 64, 00:20:17.631 "state": "configuring", 00:20:17.631 "raid_level": "raid5f", 00:20:17.631 "superblock": false, 00:20:17.631 "num_base_bdevs": 4, 
00:20:17.631 "num_base_bdevs_discovered": 2, 00:20:17.631 "num_base_bdevs_operational": 4, 00:20:17.631 "base_bdevs_list": [ 00:20:17.631 { 00:20:17.631 "name": "BaseBdev1", 00:20:17.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.631 "is_configured": false, 00:20:17.631 "data_offset": 0, 00:20:17.631 "data_size": 0 00:20:17.631 }, 00:20:17.631 { 00:20:17.631 "name": null, 00:20:17.631 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:17.631 "is_configured": false, 00:20:17.631 "data_offset": 0, 00:20:17.631 "data_size": 65536 00:20:17.631 }, 00:20:17.631 { 00:20:17.631 "name": "BaseBdev3", 00:20:17.631 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:17.631 "is_configured": true, 00:20:17.631 "data_offset": 0, 00:20:17.631 "data_size": 65536 00:20:17.631 }, 00:20:17.631 { 00:20:17.631 "name": "BaseBdev4", 00:20:17.631 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:17.631 "is_configured": true, 00:20:17.631 "data_offset": 0, 00:20:17.631 "data_size": 65536 00:20:17.631 } 00:20:17.631 ] 00:20:17.631 }' 00:20:17.631 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.631 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.890 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.890 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.890 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.890 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:18.150 11:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.150 [2024-11-20 11:32:25.808484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:18.150 BaseBdev1 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.150 11:32:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.150 [ 00:20:18.150 { 00:20:18.150 "name": "BaseBdev1", 00:20:18.150 "aliases": [ 00:20:18.150 "e61945a0-7d53-4b9d-8a26-288e8ea6fa73" 00:20:18.150 ], 00:20:18.150 "product_name": "Malloc disk", 00:20:18.150 "block_size": 512, 00:20:18.150 "num_blocks": 65536, 00:20:18.150 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:18.150 "assigned_rate_limits": { 00:20:18.150 "rw_ios_per_sec": 0, 00:20:18.150 "rw_mbytes_per_sec": 0, 00:20:18.150 "r_mbytes_per_sec": 0, 00:20:18.150 "w_mbytes_per_sec": 0 00:20:18.150 }, 00:20:18.150 "claimed": true, 00:20:18.150 "claim_type": "exclusive_write", 00:20:18.150 "zoned": false, 00:20:18.150 "supported_io_types": { 00:20:18.150 "read": true, 00:20:18.150 "write": true, 00:20:18.150 "unmap": true, 00:20:18.150 "flush": true, 00:20:18.150 "reset": true, 00:20:18.150 "nvme_admin": false, 00:20:18.150 "nvme_io": false, 00:20:18.150 "nvme_io_md": false, 00:20:18.150 "write_zeroes": true, 00:20:18.150 "zcopy": true, 00:20:18.150 "get_zone_info": false, 00:20:18.150 "zone_management": false, 00:20:18.150 "zone_append": false, 00:20:18.150 "compare": false, 00:20:18.150 "compare_and_write": false, 00:20:18.150 "abort": true, 00:20:18.150 "seek_hole": false, 00:20:18.150 "seek_data": false, 00:20:18.150 "copy": true, 00:20:18.150 "nvme_iov_md": false 00:20:18.150 }, 00:20:18.150 "memory_domains": [ 00:20:18.150 { 00:20:18.150 "dma_device_id": "system", 00:20:18.150 "dma_device_type": 1 00:20:18.150 }, 00:20:18.150 { 00:20:18.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.150 "dma_device_type": 2 00:20:18.150 } 00:20:18.150 ], 00:20:18.150 "driver_specific": {} 00:20:18.150 } 00:20:18.150 ] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:18.150 11:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.150 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.150 "name": "Existed_Raid", 00:20:18.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.150 "strip_size_kb": 64, 00:20:18.150 "state": 
"configuring", 00:20:18.150 "raid_level": "raid5f", 00:20:18.150 "superblock": false, 00:20:18.150 "num_base_bdevs": 4, 00:20:18.150 "num_base_bdevs_discovered": 3, 00:20:18.150 "num_base_bdevs_operational": 4, 00:20:18.150 "base_bdevs_list": [ 00:20:18.150 { 00:20:18.150 "name": "BaseBdev1", 00:20:18.150 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:18.150 "is_configured": true, 00:20:18.150 "data_offset": 0, 00:20:18.150 "data_size": 65536 00:20:18.150 }, 00:20:18.150 { 00:20:18.150 "name": null, 00:20:18.150 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:18.150 "is_configured": false, 00:20:18.150 "data_offset": 0, 00:20:18.150 "data_size": 65536 00:20:18.150 }, 00:20:18.150 { 00:20:18.150 "name": "BaseBdev3", 00:20:18.150 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:18.150 "is_configured": true, 00:20:18.150 "data_offset": 0, 00:20:18.150 "data_size": 65536 00:20:18.150 }, 00:20:18.150 { 00:20:18.150 "name": "BaseBdev4", 00:20:18.150 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:18.150 "is_configured": true, 00:20:18.150 "data_offset": 0, 00:20:18.151 "data_size": 65536 00:20:18.151 } 00:20:18.151 ] 00:20:18.151 }' 00:20:18.151 11:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.151 11:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.718 11:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:18.718 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.719 [2024-11-20 11:32:26.404753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.719 11:32:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.719 "name": "Existed_Raid", 00:20:18.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.719 "strip_size_kb": 64, 00:20:18.719 "state": "configuring", 00:20:18.719 "raid_level": "raid5f", 00:20:18.719 "superblock": false, 00:20:18.719 "num_base_bdevs": 4, 00:20:18.719 "num_base_bdevs_discovered": 2, 00:20:18.719 "num_base_bdevs_operational": 4, 00:20:18.719 "base_bdevs_list": [ 00:20:18.719 { 00:20:18.719 "name": "BaseBdev1", 00:20:18.719 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:18.719 "is_configured": true, 00:20:18.719 "data_offset": 0, 00:20:18.719 "data_size": 65536 00:20:18.719 }, 00:20:18.719 { 00:20:18.719 "name": null, 00:20:18.719 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:18.719 "is_configured": false, 00:20:18.719 "data_offset": 0, 00:20:18.719 "data_size": 65536 00:20:18.719 }, 00:20:18.719 { 00:20:18.719 "name": null, 00:20:18.719 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:18.719 "is_configured": false, 00:20:18.719 "data_offset": 0, 00:20:18.719 "data_size": 65536 00:20:18.719 }, 00:20:18.719 { 00:20:18.719 "name": "BaseBdev4", 00:20:18.719 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:18.719 "is_configured": true, 00:20:18.719 "data_offset": 0, 00:20:18.719 "data_size": 65536 00:20:18.719 } 00:20:18.719 ] 00:20:18.719 }' 00:20:18.719 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.719 11:32:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:19.287 11:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.287 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.287 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 11:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 [2024-11-20 11:32:27.013055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.287 
11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.287 "name": "Existed_Raid", 00:20:19.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.287 "strip_size_kb": 64, 00:20:19.287 "state": "configuring", 00:20:19.287 "raid_level": "raid5f", 00:20:19.287 "superblock": false, 00:20:19.287 "num_base_bdevs": 4, 00:20:19.287 "num_base_bdevs_discovered": 3, 00:20:19.287 "num_base_bdevs_operational": 4, 00:20:19.287 "base_bdevs_list": [ 00:20:19.287 { 00:20:19.287 "name": "BaseBdev1", 00:20:19.287 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:19.287 "is_configured": true, 00:20:19.287 "data_offset": 0, 00:20:19.287 "data_size": 65536 00:20:19.287 }, 00:20:19.287 { 00:20:19.287 "name": null, 00:20:19.287 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:19.287 "is_configured": 
false, 00:20:19.287 "data_offset": 0, 00:20:19.287 "data_size": 65536 00:20:19.287 }, 00:20:19.287 { 00:20:19.287 "name": "BaseBdev3", 00:20:19.287 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:19.287 "is_configured": true, 00:20:19.287 "data_offset": 0, 00:20:19.287 "data_size": 65536 00:20:19.287 }, 00:20:19.287 { 00:20:19.287 "name": "BaseBdev4", 00:20:19.287 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:19.287 "is_configured": true, 00:20:19.287 "data_offset": 0, 00:20:19.287 "data_size": 65536 00:20:19.287 } 00:20:19.287 ] 00:20:19.287 }' 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.287 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.854 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.854 [2024-11-20 11:32:27.625212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.113 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.113 "name": "Existed_Raid", 00:20:20.113 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:20.113 "strip_size_kb": 64, 00:20:20.113 "state": "configuring", 00:20:20.113 "raid_level": "raid5f", 00:20:20.113 "superblock": false, 00:20:20.114 "num_base_bdevs": 4, 00:20:20.114 "num_base_bdevs_discovered": 2, 00:20:20.114 "num_base_bdevs_operational": 4, 00:20:20.114 "base_bdevs_list": [ 00:20:20.114 { 00:20:20.114 "name": null, 00:20:20.114 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:20.114 "is_configured": false, 00:20:20.114 "data_offset": 0, 00:20:20.114 "data_size": 65536 00:20:20.114 }, 00:20:20.114 { 00:20:20.114 "name": null, 00:20:20.114 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:20.114 "is_configured": false, 00:20:20.114 "data_offset": 0, 00:20:20.114 "data_size": 65536 00:20:20.114 }, 00:20:20.114 { 00:20:20.114 "name": "BaseBdev3", 00:20:20.114 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:20.114 "is_configured": true, 00:20:20.114 "data_offset": 0, 00:20:20.114 "data_size": 65536 00:20:20.114 }, 00:20:20.114 { 00:20:20.114 "name": "BaseBdev4", 00:20:20.114 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:20.114 "is_configured": true, 00:20:20.114 "data_offset": 0, 00:20:20.114 "data_size": 65536 00:20:20.114 } 00:20:20.114 ] 00:20:20.114 }' 00:20:20.114 11:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.114 11:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.682 [2024-11-20 11:32:28.291775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.682 "name": "Existed_Raid", 00:20:20.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.682 "strip_size_kb": 64, 00:20:20.682 "state": "configuring", 00:20:20.682 "raid_level": "raid5f", 00:20:20.682 "superblock": false, 00:20:20.682 "num_base_bdevs": 4, 00:20:20.682 "num_base_bdevs_discovered": 3, 00:20:20.682 "num_base_bdevs_operational": 4, 00:20:20.682 "base_bdevs_list": [ 00:20:20.682 { 00:20:20.682 "name": null, 00:20:20.682 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:20.682 "is_configured": false, 00:20:20.682 "data_offset": 0, 00:20:20.682 "data_size": 65536 00:20:20.682 }, 00:20:20.682 { 00:20:20.682 "name": "BaseBdev2", 00:20:20.682 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:20.682 "is_configured": true, 00:20:20.682 "data_offset": 0, 00:20:20.682 "data_size": 65536 00:20:20.682 }, 00:20:20.682 { 00:20:20.682 "name": "BaseBdev3", 00:20:20.682 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:20.682 "is_configured": true, 00:20:20.682 "data_offset": 0, 00:20:20.682 "data_size": 65536 00:20:20.682 }, 00:20:20.682 { 00:20:20.682 "name": "BaseBdev4", 00:20:20.682 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:20.682 "is_configured": true, 00:20:20.682 "data_offset": 0, 00:20:20.682 "data_size": 65536 00:20:20.682 } 00:20:20.682 ] 00:20:20.682 }' 00:20:20.682 11:32:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.682 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e61945a0-7d53-4b9d-8a26-288e8ea6fa73 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 [2024-11-20 11:32:28.950885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:21.252 [2024-11-20 
11:32:28.950957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:21.252 [2024-11-20 11:32:28.950971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:21.252 [2024-11-20 11:32:28.951315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:21.252 [2024-11-20 11:32:28.957837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:21.252 [2024-11-20 11:32:28.958038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:21.252 [2024-11-20 11:32:28.958422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.252 NewBaseBdev 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.252 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.252 [ 00:20:21.252 { 00:20:21.252 "name": "NewBaseBdev", 00:20:21.252 "aliases": [ 00:20:21.252 "e61945a0-7d53-4b9d-8a26-288e8ea6fa73" 00:20:21.252 ], 00:20:21.252 "product_name": "Malloc disk", 00:20:21.252 "block_size": 512, 00:20:21.252 "num_blocks": 65536, 00:20:21.252 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:21.252 "assigned_rate_limits": { 00:20:21.252 "rw_ios_per_sec": 0, 00:20:21.252 "rw_mbytes_per_sec": 0, 00:20:21.252 "r_mbytes_per_sec": 0, 00:20:21.252 "w_mbytes_per_sec": 0 00:20:21.253 }, 00:20:21.253 "claimed": true, 00:20:21.253 "claim_type": "exclusive_write", 00:20:21.253 "zoned": false, 00:20:21.253 "supported_io_types": { 00:20:21.253 "read": true, 00:20:21.253 "write": true, 00:20:21.253 "unmap": true, 00:20:21.253 "flush": true, 00:20:21.253 "reset": true, 00:20:21.253 "nvme_admin": false, 00:20:21.253 "nvme_io": false, 00:20:21.253 "nvme_io_md": false, 00:20:21.253 "write_zeroes": true, 00:20:21.253 "zcopy": true, 00:20:21.253 "get_zone_info": false, 00:20:21.253 "zone_management": false, 00:20:21.253 "zone_append": false, 00:20:21.253 "compare": false, 00:20:21.253 "compare_and_write": false, 00:20:21.253 "abort": true, 00:20:21.253 "seek_hole": false, 00:20:21.253 "seek_data": false, 00:20:21.253 "copy": true, 00:20:21.253 "nvme_iov_md": false 00:20:21.253 }, 00:20:21.253 "memory_domains": [ 00:20:21.253 { 00:20:21.253 "dma_device_id": "system", 00:20:21.253 "dma_device_type": 1 00:20:21.253 }, 00:20:21.253 { 00:20:21.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.253 "dma_device_type": 2 00:20:21.253 } 
00:20:21.253 ], 00:20:21.253 "driver_specific": {} 00:20:21.253 } 00:20:21.253 ] 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.253 11:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.253 11:32:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.253 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.253 "name": "Existed_Raid", 00:20:21.253 "uuid": "5e82b9c1-71c8-4a30-b057-e94962efa3a1", 00:20:21.253 "strip_size_kb": 64, 00:20:21.253 "state": "online", 00:20:21.253 "raid_level": "raid5f", 00:20:21.253 "superblock": false, 00:20:21.253 "num_base_bdevs": 4, 00:20:21.253 "num_base_bdevs_discovered": 4, 00:20:21.253 "num_base_bdevs_operational": 4, 00:20:21.253 "base_bdevs_list": [ 00:20:21.253 { 00:20:21.253 "name": "NewBaseBdev", 00:20:21.253 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:21.254 "is_configured": true, 00:20:21.254 "data_offset": 0, 00:20:21.254 "data_size": 65536 00:20:21.254 }, 00:20:21.254 { 00:20:21.254 "name": "BaseBdev2", 00:20:21.254 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:21.254 "is_configured": true, 00:20:21.254 "data_offset": 0, 00:20:21.254 "data_size": 65536 00:20:21.254 }, 00:20:21.254 { 00:20:21.254 "name": "BaseBdev3", 00:20:21.254 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:21.254 "is_configured": true, 00:20:21.254 "data_offset": 0, 00:20:21.254 "data_size": 65536 00:20:21.254 }, 00:20:21.254 { 00:20:21.254 "name": "BaseBdev4", 00:20:21.254 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:21.254 "is_configured": true, 00:20:21.254 "data_offset": 0, 00:20:21.254 "data_size": 65536 00:20:21.254 } 00:20:21.254 ] 00:20:21.254 }' 00:20:21.254 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.254 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:21.822 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:21.822 11:32:29 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:21.822 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:21.822 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:21.823 [2024-11-20 11:32:29.526523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:21.823 "name": "Existed_Raid", 00:20:21.823 "aliases": [ 00:20:21.823 "5e82b9c1-71c8-4a30-b057-e94962efa3a1" 00:20:21.823 ], 00:20:21.823 "product_name": "Raid Volume", 00:20:21.823 "block_size": 512, 00:20:21.823 "num_blocks": 196608, 00:20:21.823 "uuid": "5e82b9c1-71c8-4a30-b057-e94962efa3a1", 00:20:21.823 "assigned_rate_limits": { 00:20:21.823 "rw_ios_per_sec": 0, 00:20:21.823 "rw_mbytes_per_sec": 0, 00:20:21.823 "r_mbytes_per_sec": 0, 00:20:21.823 "w_mbytes_per_sec": 0 00:20:21.823 }, 00:20:21.823 "claimed": false, 00:20:21.823 "zoned": false, 00:20:21.823 "supported_io_types": { 00:20:21.823 "read": true, 00:20:21.823 "write": true, 00:20:21.823 "unmap": false, 00:20:21.823 "flush": false, 00:20:21.823 "reset": true, 00:20:21.823 "nvme_admin": false, 00:20:21.823 "nvme_io": false, 00:20:21.823 "nvme_io_md": 
false, 00:20:21.823 "write_zeroes": true, 00:20:21.823 "zcopy": false, 00:20:21.823 "get_zone_info": false, 00:20:21.823 "zone_management": false, 00:20:21.823 "zone_append": false, 00:20:21.823 "compare": false, 00:20:21.823 "compare_and_write": false, 00:20:21.823 "abort": false, 00:20:21.823 "seek_hole": false, 00:20:21.823 "seek_data": false, 00:20:21.823 "copy": false, 00:20:21.823 "nvme_iov_md": false 00:20:21.823 }, 00:20:21.823 "driver_specific": { 00:20:21.823 "raid": { 00:20:21.823 "uuid": "5e82b9c1-71c8-4a30-b057-e94962efa3a1", 00:20:21.823 "strip_size_kb": 64, 00:20:21.823 "state": "online", 00:20:21.823 "raid_level": "raid5f", 00:20:21.823 "superblock": false, 00:20:21.823 "num_base_bdevs": 4, 00:20:21.823 "num_base_bdevs_discovered": 4, 00:20:21.823 "num_base_bdevs_operational": 4, 00:20:21.823 "base_bdevs_list": [ 00:20:21.823 { 00:20:21.823 "name": "NewBaseBdev", 00:20:21.823 "uuid": "e61945a0-7d53-4b9d-8a26-288e8ea6fa73", 00:20:21.823 "is_configured": true, 00:20:21.823 "data_offset": 0, 00:20:21.823 "data_size": 65536 00:20:21.823 }, 00:20:21.823 { 00:20:21.823 "name": "BaseBdev2", 00:20:21.823 "uuid": "a7b73f74-9d7a-46fb-b0a3-b574c23bf2d0", 00:20:21.823 "is_configured": true, 00:20:21.823 "data_offset": 0, 00:20:21.823 "data_size": 65536 00:20:21.823 }, 00:20:21.823 { 00:20:21.823 "name": "BaseBdev3", 00:20:21.823 "uuid": "8f2eaf4e-88bf-4e1d-96ff-2e3314fc2fa9", 00:20:21.823 "is_configured": true, 00:20:21.823 "data_offset": 0, 00:20:21.823 "data_size": 65536 00:20:21.823 }, 00:20:21.823 { 00:20:21.823 "name": "BaseBdev4", 00:20:21.823 "uuid": "673b696d-6e73-4822-ad90-8825684a88ab", 00:20:21.823 "is_configured": true, 00:20:21.823 "data_offset": 0, 00:20:21.823 "data_size": 65536 00:20:21.823 } 00:20:21.823 ] 00:20:21.823 } 00:20:21.823 } 00:20:21.823 }' 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:21.823 11:32:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:21.823 BaseBdev2 00:20:21.823 BaseBdev3 00:20:21.823 BaseBdev4' 00:20:21.823 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.082 11:32:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.082 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.342 [2024-11-20 11:32:29.926290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.342 [2024-11-20 11:32:29.926330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.342 [2024-11-20 11:32:29.926417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.342 [2024-11-20 11:32:29.926925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.342 [2024-11-20 11:32:29.926954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83063 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83063 ']' 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83063 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83063 00:20:22.342 killing process with pid 83063 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83063' 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83063 00:20:22.342 [2024-11-20 11:32:29.967410] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.342 11:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83063 00:20:22.601 [2024-11-20 11:32:30.315237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.539 11:32:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:23.539 00:20:23.539 real 0m13.087s 00:20:23.539 user 0m21.755s 00:20:23.539 sys 0m1.857s 00:20:23.539 11:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.539 11:32:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.539 ************************************ 00:20:23.539 END TEST raid5f_state_function_test 00:20:23.539 ************************************ 00:20:23.799 11:32:31 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:20:23.799 11:32:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:23.799 11:32:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.799 11:32:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.799 ************************************ 00:20:23.799 START TEST 
raid5f_state_function_test_sb 00:20:23.799 ************************************ 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:23.799 
11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:23.799 Process raid pid: 83746 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83746 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83746' 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:23.799 11:32:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83746 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83746 ']' 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.799 11:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.799 [2024-11-20 11:32:31.512637] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:20:23.799 [2024-11-20 11:32:31.513178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.058 [2024-11-20 11:32:31.692209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.058 [2024-11-20 11:32:31.826293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.317 [2024-11-20 11:32:32.035926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.317 [2024-11-20 11:32:32.035991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.883 [2024-11-20 11:32:32.504006] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:24.883 [2024-11-20 11:32:32.504084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:24.883 [2024-11-20 11:32:32.504108] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.883 [2024-11-20 11:32:32.504136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.883 [2024-11-20 11:32:32.504148] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:20:24.883 [2024-11-20 11:32:32.504163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.883 [2024-11-20 11:32:32.504173] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:24.883 [2024-11-20 11:32:32.504187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.883 "name": "Existed_Raid", 00:20:24.883 "uuid": "93673d9f-149f-4f11-b711-a17458e141b9", 00:20:24.883 "strip_size_kb": 64, 00:20:24.883 "state": "configuring", 00:20:24.883 "raid_level": "raid5f", 00:20:24.883 "superblock": true, 00:20:24.883 "num_base_bdevs": 4, 00:20:24.883 "num_base_bdevs_discovered": 0, 00:20:24.883 "num_base_bdevs_operational": 4, 00:20:24.883 "base_bdevs_list": [ 00:20:24.883 { 00:20:24.883 "name": "BaseBdev1", 00:20:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.883 "is_configured": false, 00:20:24.883 "data_offset": 0, 00:20:24.883 "data_size": 0 00:20:24.883 }, 00:20:24.883 { 00:20:24.883 "name": "BaseBdev2", 00:20:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.883 "is_configured": false, 00:20:24.883 "data_offset": 0, 00:20:24.883 "data_size": 0 00:20:24.883 }, 00:20:24.883 { 00:20:24.883 "name": "BaseBdev3", 00:20:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.883 "is_configured": false, 00:20:24.883 "data_offset": 0, 00:20:24.883 "data_size": 0 00:20:24.883 }, 00:20:24.883 { 00:20:24.883 "name": "BaseBdev4", 00:20:24.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.883 "is_configured": false, 00:20:24.883 "data_offset": 0, 00:20:24.883 "data_size": 0 00:20:24.883 } 00:20:24.883 ] 00:20:24.883 }' 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.883 11:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.454 [2024-11-20 11:32:33.040049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.454 [2024-11-20 11:32:33.040098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.454 [2024-11-20 11:32:33.052024] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.454 [2024-11-20 11:32:33.052205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.454 [2024-11-20 11:32:33.052340] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.454 [2024-11-20 11:32:33.052403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.454 [2024-11-20 11:32:33.052512] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.454 [2024-11-20 11:32:33.052572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.454 [2024-11-20 11:32:33.052730] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:25.454 [2024-11-20 11:32:33.052791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.454 [2024-11-20 11:32:33.098195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.454 BaseBdev1 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:25.454 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.455 [ 00:20:25.455 { 00:20:25.455 "name": "BaseBdev1", 00:20:25.455 "aliases": [ 00:20:25.455 "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f" 00:20:25.455 ], 00:20:25.455 "product_name": "Malloc disk", 00:20:25.455 "block_size": 512, 00:20:25.455 "num_blocks": 65536, 00:20:25.455 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:25.455 "assigned_rate_limits": { 00:20:25.455 "rw_ios_per_sec": 0, 00:20:25.455 "rw_mbytes_per_sec": 0, 00:20:25.455 "r_mbytes_per_sec": 0, 00:20:25.455 "w_mbytes_per_sec": 0 00:20:25.455 }, 00:20:25.455 "claimed": true, 00:20:25.455 "claim_type": "exclusive_write", 00:20:25.455 "zoned": false, 00:20:25.455 "supported_io_types": { 00:20:25.455 "read": true, 00:20:25.455 "write": true, 00:20:25.455 "unmap": true, 00:20:25.455 "flush": true, 00:20:25.455 "reset": true, 00:20:25.455 "nvme_admin": false, 00:20:25.455 "nvme_io": false, 00:20:25.455 "nvme_io_md": false, 00:20:25.455 "write_zeroes": true, 00:20:25.455 "zcopy": true, 00:20:25.455 "get_zone_info": false, 00:20:25.455 "zone_management": false, 00:20:25.455 "zone_append": false, 00:20:25.455 "compare": false, 00:20:25.455 "compare_and_write": false, 00:20:25.455 "abort": true, 00:20:25.455 "seek_hole": false, 00:20:25.455 "seek_data": false, 00:20:25.455 "copy": true, 00:20:25.455 "nvme_iov_md": false 00:20:25.455 }, 00:20:25.455 "memory_domains": [ 00:20:25.455 { 00:20:25.455 "dma_device_id": "system", 00:20:25.455 "dma_device_type": 1 00:20:25.455 }, 00:20:25.455 { 00:20:25.455 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:25.455 "dma_device_type": 2 00:20:25.455 } 00:20:25.455 ], 00:20:25.455 "driver_specific": {} 00:20:25.455 } 00:20:25.455 ] 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.455 "name": "Existed_Raid", 00:20:25.455 "uuid": "0cc053fa-3ef5-4eac-9130-5635f6a76d36", 00:20:25.455 "strip_size_kb": 64, 00:20:25.455 "state": "configuring", 00:20:25.455 "raid_level": "raid5f", 00:20:25.455 "superblock": true, 00:20:25.455 "num_base_bdevs": 4, 00:20:25.455 "num_base_bdevs_discovered": 1, 00:20:25.455 "num_base_bdevs_operational": 4, 00:20:25.455 "base_bdevs_list": [ 00:20:25.455 { 00:20:25.455 "name": "BaseBdev1", 00:20:25.455 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:25.455 "is_configured": true, 00:20:25.455 "data_offset": 2048, 00:20:25.455 "data_size": 63488 00:20:25.455 }, 00:20:25.455 { 00:20:25.455 "name": "BaseBdev2", 00:20:25.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.455 "is_configured": false, 00:20:25.455 "data_offset": 0, 00:20:25.455 "data_size": 0 00:20:25.455 }, 00:20:25.455 { 00:20:25.455 "name": "BaseBdev3", 00:20:25.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.455 "is_configured": false, 00:20:25.455 "data_offset": 0, 00:20:25.455 "data_size": 0 00:20:25.455 }, 00:20:25.455 { 00:20:25.455 "name": "BaseBdev4", 00:20:25.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.455 "is_configured": false, 00:20:25.455 "data_offset": 0, 00:20:25.455 "data_size": 0 00:20:25.455 } 00:20:25.455 ] 00:20:25.455 }' 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.455 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:26.021 11:32:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.021 [2024-11-20 11:32:33.646400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.021 [2024-11-20 11:32:33.646481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.021 [2024-11-20 11:32:33.654470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.021 [2024-11-20 11:32:33.657194] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.021 [2024-11-20 11:32:33.657387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.021 [2024-11-20 11:32:33.657526] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:26.021 [2024-11-20 11:32:33.657591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:26.021 [2024-11-20 11:32:33.657748] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:26.021 [2024-11-20 11:32:33.657837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.021 11:32:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.021 "name": "Existed_Raid", 00:20:26.021 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:26.021 "strip_size_kb": 64, 00:20:26.021 "state": "configuring", 00:20:26.021 "raid_level": "raid5f", 00:20:26.021 "superblock": true, 00:20:26.021 "num_base_bdevs": 4, 00:20:26.021 "num_base_bdevs_discovered": 1, 00:20:26.021 "num_base_bdevs_operational": 4, 00:20:26.021 "base_bdevs_list": [ 00:20:26.021 { 00:20:26.021 "name": "BaseBdev1", 00:20:26.021 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:26.021 "is_configured": true, 00:20:26.021 "data_offset": 2048, 00:20:26.021 "data_size": 63488 00:20:26.021 }, 00:20:26.021 { 00:20:26.021 "name": "BaseBdev2", 00:20:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.021 "is_configured": false, 00:20:26.021 "data_offset": 0, 00:20:26.021 "data_size": 0 00:20:26.021 }, 00:20:26.021 { 00:20:26.021 "name": "BaseBdev3", 00:20:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.021 "is_configured": false, 00:20:26.021 "data_offset": 0, 00:20:26.021 "data_size": 0 00:20:26.021 }, 00:20:26.021 { 00:20:26.021 "name": "BaseBdev4", 00:20:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.021 "is_configured": false, 00:20:26.021 "data_offset": 0, 00:20:26.021 "data_size": 0 00:20:26.021 } 00:20:26.021 ] 00:20:26.021 }' 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.021 11:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.588 [2024-11-20 11:32:34.181030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:26.588 BaseBdev2 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.588 [ 00:20:26.588 { 00:20:26.588 "name": "BaseBdev2", 00:20:26.588 "aliases": [ 00:20:26.588 
"bde85c8d-10e4-47f4-9d43-a562348831b0" 00:20:26.588 ], 00:20:26.588 "product_name": "Malloc disk", 00:20:26.588 "block_size": 512, 00:20:26.588 "num_blocks": 65536, 00:20:26.588 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:26.588 "assigned_rate_limits": { 00:20:26.588 "rw_ios_per_sec": 0, 00:20:26.588 "rw_mbytes_per_sec": 0, 00:20:26.588 "r_mbytes_per_sec": 0, 00:20:26.588 "w_mbytes_per_sec": 0 00:20:26.588 }, 00:20:26.588 "claimed": true, 00:20:26.588 "claim_type": "exclusive_write", 00:20:26.588 "zoned": false, 00:20:26.588 "supported_io_types": { 00:20:26.588 "read": true, 00:20:26.588 "write": true, 00:20:26.588 "unmap": true, 00:20:26.588 "flush": true, 00:20:26.588 "reset": true, 00:20:26.588 "nvme_admin": false, 00:20:26.588 "nvme_io": false, 00:20:26.588 "nvme_io_md": false, 00:20:26.588 "write_zeroes": true, 00:20:26.588 "zcopy": true, 00:20:26.588 "get_zone_info": false, 00:20:26.588 "zone_management": false, 00:20:26.588 "zone_append": false, 00:20:26.588 "compare": false, 00:20:26.588 "compare_and_write": false, 00:20:26.588 "abort": true, 00:20:26.588 "seek_hole": false, 00:20:26.588 "seek_data": false, 00:20:26.588 "copy": true, 00:20:26.588 "nvme_iov_md": false 00:20:26.588 }, 00:20:26.588 "memory_domains": [ 00:20:26.588 { 00:20:26.588 "dma_device_id": "system", 00:20:26.588 "dma_device_type": 1 00:20:26.588 }, 00:20:26.588 { 00:20:26.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.588 "dma_device_type": 2 00:20:26.588 } 00:20:26.588 ], 00:20:26.588 "driver_specific": {} 00:20:26.588 } 00:20:26.588 ] 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.588 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.589 "name": "Existed_Raid", 00:20:26.589 "uuid": 
"63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:26.589 "strip_size_kb": 64, 00:20:26.589 "state": "configuring", 00:20:26.589 "raid_level": "raid5f", 00:20:26.589 "superblock": true, 00:20:26.589 "num_base_bdevs": 4, 00:20:26.589 "num_base_bdevs_discovered": 2, 00:20:26.589 "num_base_bdevs_operational": 4, 00:20:26.589 "base_bdevs_list": [ 00:20:26.589 { 00:20:26.589 "name": "BaseBdev1", 00:20:26.589 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:26.589 "is_configured": true, 00:20:26.589 "data_offset": 2048, 00:20:26.589 "data_size": 63488 00:20:26.589 }, 00:20:26.589 { 00:20:26.589 "name": "BaseBdev2", 00:20:26.589 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:26.589 "is_configured": true, 00:20:26.589 "data_offset": 2048, 00:20:26.589 "data_size": 63488 00:20:26.589 }, 00:20:26.589 { 00:20:26.589 "name": "BaseBdev3", 00:20:26.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.589 "is_configured": false, 00:20:26.589 "data_offset": 0, 00:20:26.589 "data_size": 0 00:20:26.589 }, 00:20:26.589 { 00:20:26.589 "name": "BaseBdev4", 00:20:26.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.589 "is_configured": false, 00:20:26.589 "data_offset": 0, 00:20:26.589 "data_size": 0 00:20:26.589 } 00:20:26.589 ] 00:20:26.589 }' 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.589 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.156 [2024-11-20 11:32:34.777919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.156 BaseBdev3 
00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.156 [ 00:20:27.156 { 00:20:27.156 "name": "BaseBdev3", 00:20:27.156 "aliases": [ 00:20:27.156 "46b6a9aa-90b6-4904-9836-5ee531c721bf" 00:20:27.156 ], 00:20:27.156 "product_name": "Malloc disk", 00:20:27.156 "block_size": 512, 00:20:27.156 "num_blocks": 65536, 00:20:27.156 "uuid": "46b6a9aa-90b6-4904-9836-5ee531c721bf", 00:20:27.156 
"assigned_rate_limits": { 00:20:27.156 "rw_ios_per_sec": 0, 00:20:27.156 "rw_mbytes_per_sec": 0, 00:20:27.156 "r_mbytes_per_sec": 0, 00:20:27.156 "w_mbytes_per_sec": 0 00:20:27.156 }, 00:20:27.156 "claimed": true, 00:20:27.156 "claim_type": "exclusive_write", 00:20:27.156 "zoned": false, 00:20:27.156 "supported_io_types": { 00:20:27.156 "read": true, 00:20:27.156 "write": true, 00:20:27.156 "unmap": true, 00:20:27.156 "flush": true, 00:20:27.156 "reset": true, 00:20:27.156 "nvme_admin": false, 00:20:27.156 "nvme_io": false, 00:20:27.156 "nvme_io_md": false, 00:20:27.156 "write_zeroes": true, 00:20:27.156 "zcopy": true, 00:20:27.156 "get_zone_info": false, 00:20:27.156 "zone_management": false, 00:20:27.156 "zone_append": false, 00:20:27.156 "compare": false, 00:20:27.156 "compare_and_write": false, 00:20:27.156 "abort": true, 00:20:27.156 "seek_hole": false, 00:20:27.156 "seek_data": false, 00:20:27.156 "copy": true, 00:20:27.156 "nvme_iov_md": false 00:20:27.156 }, 00:20:27.156 "memory_domains": [ 00:20:27.156 { 00:20:27.156 "dma_device_id": "system", 00:20:27.156 "dma_device_type": 1 00:20:27.156 }, 00:20:27.156 { 00:20:27.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.156 "dma_device_type": 2 00:20:27.156 } 00:20:27.156 ], 00:20:27.156 "driver_specific": {} 00:20:27.156 } 00:20:27.156 ] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.156 "name": "Existed_Raid", 00:20:27.156 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:27.156 "strip_size_kb": 64, 00:20:27.156 "state": "configuring", 00:20:27.156 "raid_level": "raid5f", 00:20:27.156 "superblock": true, 00:20:27.156 "num_base_bdevs": 4, 00:20:27.156 "num_base_bdevs_discovered": 3, 
00:20:27.156 "num_base_bdevs_operational": 4, 00:20:27.156 "base_bdevs_list": [ 00:20:27.156 { 00:20:27.156 "name": "BaseBdev1", 00:20:27.156 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:27.156 "is_configured": true, 00:20:27.156 "data_offset": 2048, 00:20:27.156 "data_size": 63488 00:20:27.156 }, 00:20:27.156 { 00:20:27.156 "name": "BaseBdev2", 00:20:27.156 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:27.156 "is_configured": true, 00:20:27.156 "data_offset": 2048, 00:20:27.156 "data_size": 63488 00:20:27.156 }, 00:20:27.156 { 00:20:27.156 "name": "BaseBdev3", 00:20:27.156 "uuid": "46b6a9aa-90b6-4904-9836-5ee531c721bf", 00:20:27.156 "is_configured": true, 00:20:27.156 "data_offset": 2048, 00:20:27.156 "data_size": 63488 00:20:27.156 }, 00:20:27.156 { 00:20:27.156 "name": "BaseBdev4", 00:20:27.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.156 "is_configured": false, 00:20:27.156 "data_offset": 0, 00:20:27.156 "data_size": 0 00:20:27.156 } 00:20:27.156 ] 00:20:27.156 }' 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.156 11:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.725 [2024-11-20 11:32:35.340950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:27.725 [2024-11-20 11:32:35.341338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:27.725 [2024-11-20 11:32:35.341360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:27.725 BaseBdev4 
00:20:27.725 [2024-11-20 11:32:35.341711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.725 [2024-11-20 11:32:35.349180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:27.725 [2024-11-20 11:32:35.349212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:27.725 [2024-11-20 11:32:35.349527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:27.725 11:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.725 [ 00:20:27.725 { 00:20:27.725 "name": "BaseBdev4", 00:20:27.725 "aliases": [ 00:20:27.725 "e005bccc-54ac-4890-b583-592e8321d0af" 00:20:27.725 ], 00:20:27.725 "product_name": "Malloc disk", 00:20:27.725 "block_size": 512, 00:20:27.725 "num_blocks": 65536, 00:20:27.725 "uuid": "e005bccc-54ac-4890-b583-592e8321d0af", 00:20:27.725 "assigned_rate_limits": { 00:20:27.725 "rw_ios_per_sec": 0, 00:20:27.725 "rw_mbytes_per_sec": 0, 00:20:27.725 "r_mbytes_per_sec": 0, 00:20:27.725 "w_mbytes_per_sec": 0 00:20:27.725 }, 00:20:27.725 "claimed": true, 00:20:27.725 "claim_type": "exclusive_write", 00:20:27.725 "zoned": false, 00:20:27.725 "supported_io_types": { 00:20:27.725 "read": true, 00:20:27.725 "write": true, 00:20:27.725 "unmap": true, 00:20:27.725 "flush": true, 00:20:27.725 "reset": true, 00:20:27.725 "nvme_admin": false, 00:20:27.725 "nvme_io": false, 00:20:27.725 "nvme_io_md": false, 00:20:27.725 "write_zeroes": true, 00:20:27.725 "zcopy": true, 00:20:27.725 "get_zone_info": false, 00:20:27.725 "zone_management": false, 00:20:27.725 "zone_append": false, 00:20:27.725 "compare": false, 00:20:27.725 "compare_and_write": false, 00:20:27.725 "abort": true, 00:20:27.725 "seek_hole": false, 00:20:27.725 "seek_data": false, 00:20:27.725 "copy": true, 00:20:27.725 "nvme_iov_md": false 00:20:27.725 }, 00:20:27.725 "memory_domains": [ 00:20:27.725 { 00:20:27.725 "dma_device_id": "system", 00:20:27.725 "dma_device_type": 1 00:20:27.725 }, 00:20:27.725 { 00:20:27.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.725 "dma_device_type": 2 00:20:27.725 } 00:20:27.725 ], 00:20:27.725 "driver_specific": {} 00:20:27.725 } 00:20:27.725 ] 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.725 11:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.725 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.725 "name": "Existed_Raid", 00:20:27.725 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:27.725 "strip_size_kb": 64, 00:20:27.725 "state": "online", 00:20:27.725 "raid_level": "raid5f", 00:20:27.725 "superblock": true, 00:20:27.725 "num_base_bdevs": 4, 00:20:27.725 "num_base_bdevs_discovered": 4, 00:20:27.725 "num_base_bdevs_operational": 4, 00:20:27.725 "base_bdevs_list": [ 00:20:27.725 { 00:20:27.725 "name": "BaseBdev1", 00:20:27.725 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:27.725 "is_configured": true, 00:20:27.725 "data_offset": 2048, 00:20:27.725 "data_size": 63488 00:20:27.725 }, 00:20:27.725 { 00:20:27.725 "name": "BaseBdev2", 00:20:27.725 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:27.725 "is_configured": true, 00:20:27.725 "data_offset": 2048, 00:20:27.725 "data_size": 63488 00:20:27.725 }, 00:20:27.725 { 00:20:27.725 "name": "BaseBdev3", 00:20:27.725 "uuid": "46b6a9aa-90b6-4904-9836-5ee531c721bf", 00:20:27.725 "is_configured": true, 00:20:27.725 "data_offset": 2048, 00:20:27.725 "data_size": 63488 00:20:27.725 }, 00:20:27.725 { 00:20:27.725 "name": "BaseBdev4", 00:20:27.725 "uuid": "e005bccc-54ac-4890-b583-592e8321d0af", 00:20:27.725 "is_configured": true, 00:20:27.725 "data_offset": 2048, 00:20:27.726 "data_size": 63488 00:20:27.726 } 00:20:27.726 ] 00:20:27.726 }' 00:20:27.726 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.726 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:28.292 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.293 [2024-11-20 11:32:35.917388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:28.293 "name": "Existed_Raid", 00:20:28.293 "aliases": [ 00:20:28.293 "63bcbe3d-5f94-46d2-a657-6293254eb460" 00:20:28.293 ], 00:20:28.293 "product_name": "Raid Volume", 00:20:28.293 "block_size": 512, 00:20:28.293 "num_blocks": 190464, 00:20:28.293 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:28.293 "assigned_rate_limits": { 00:20:28.293 "rw_ios_per_sec": 0, 00:20:28.293 "rw_mbytes_per_sec": 0, 00:20:28.293 "r_mbytes_per_sec": 0, 00:20:28.293 "w_mbytes_per_sec": 0 00:20:28.293 }, 00:20:28.293 "claimed": false, 00:20:28.293 "zoned": false, 00:20:28.293 "supported_io_types": { 00:20:28.293 "read": true, 00:20:28.293 "write": true, 00:20:28.293 "unmap": false, 00:20:28.293 "flush": false, 
00:20:28.293 "reset": true, 00:20:28.293 "nvme_admin": false, 00:20:28.293 "nvme_io": false, 00:20:28.293 "nvme_io_md": false, 00:20:28.293 "write_zeroes": true, 00:20:28.293 "zcopy": false, 00:20:28.293 "get_zone_info": false, 00:20:28.293 "zone_management": false, 00:20:28.293 "zone_append": false, 00:20:28.293 "compare": false, 00:20:28.293 "compare_and_write": false, 00:20:28.293 "abort": false, 00:20:28.293 "seek_hole": false, 00:20:28.293 "seek_data": false, 00:20:28.293 "copy": false, 00:20:28.293 "nvme_iov_md": false 00:20:28.293 }, 00:20:28.293 "driver_specific": { 00:20:28.293 "raid": { 00:20:28.293 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:28.293 "strip_size_kb": 64, 00:20:28.293 "state": "online", 00:20:28.293 "raid_level": "raid5f", 00:20:28.293 "superblock": true, 00:20:28.293 "num_base_bdevs": 4, 00:20:28.293 "num_base_bdevs_discovered": 4, 00:20:28.293 "num_base_bdevs_operational": 4, 00:20:28.293 "base_bdevs_list": [ 00:20:28.293 { 00:20:28.293 "name": "BaseBdev1", 00:20:28.293 "uuid": "eda7bd42-7e5a-4fcb-8f17-e7b9331cff0f", 00:20:28.293 "is_configured": true, 00:20:28.293 "data_offset": 2048, 00:20:28.293 "data_size": 63488 00:20:28.293 }, 00:20:28.293 { 00:20:28.293 "name": "BaseBdev2", 00:20:28.293 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:28.293 "is_configured": true, 00:20:28.293 "data_offset": 2048, 00:20:28.293 "data_size": 63488 00:20:28.293 }, 00:20:28.293 { 00:20:28.293 "name": "BaseBdev3", 00:20:28.293 "uuid": "46b6a9aa-90b6-4904-9836-5ee531c721bf", 00:20:28.293 "is_configured": true, 00:20:28.293 "data_offset": 2048, 00:20:28.293 "data_size": 63488 00:20:28.293 }, 00:20:28.293 { 00:20:28.293 "name": "BaseBdev4", 00:20:28.293 "uuid": "e005bccc-54ac-4890-b583-592e8321d0af", 00:20:28.293 "is_configured": true, 00:20:28.293 "data_offset": 2048, 00:20:28.293 "data_size": 63488 00:20:28.293 } 00:20:28.293 ] 00:20:28.293 } 00:20:28.293 } 00:20:28.293 }' 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:28.293 11:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:28.293 BaseBdev2 00:20:28.293 BaseBdev3 00:20:28.293 BaseBdev4' 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.293 11:32:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.293 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.552 [2024-11-20 11:32:36.281318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.552 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.810 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.810 "name": "Existed_Raid", 00:20:28.810 "uuid": "63bcbe3d-5f94-46d2-a657-6293254eb460", 00:20:28.810 "strip_size_kb": 64, 00:20:28.810 "state": "online", 00:20:28.810 "raid_level": "raid5f", 00:20:28.810 "superblock": true, 00:20:28.810 "num_base_bdevs": 4, 00:20:28.810 "num_base_bdevs_discovered": 3, 00:20:28.810 "num_base_bdevs_operational": 3, 00:20:28.810 "base_bdevs_list": [ 00:20:28.810 { 00:20:28.810 "name": null, 00:20:28.810 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:28.810 "is_configured": false, 00:20:28.810 "data_offset": 0, 00:20:28.810 "data_size": 63488 00:20:28.810 }, 00:20:28.810 { 00:20:28.810 "name": "BaseBdev2", 00:20:28.810 "uuid": "bde85c8d-10e4-47f4-9d43-a562348831b0", 00:20:28.810 "is_configured": true, 00:20:28.810 "data_offset": 2048, 00:20:28.810 "data_size": 63488 00:20:28.810 }, 00:20:28.810 { 00:20:28.810 "name": "BaseBdev3", 00:20:28.810 "uuid": "46b6a9aa-90b6-4904-9836-5ee531c721bf", 00:20:28.810 "is_configured": true, 00:20:28.810 "data_offset": 2048, 00:20:28.810 "data_size": 63488 00:20:28.810 }, 00:20:28.810 { 00:20:28.810 "name": "BaseBdev4", 00:20:28.810 "uuid": "e005bccc-54ac-4890-b583-592e8321d0af", 00:20:28.810 "is_configured": true, 00:20:28.810 "data_offset": 2048, 00:20:28.810 "data_size": 63488 00:20:28.810 } 00:20:28.810 ] 00:20:28.810 }' 00:20:28.810 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.810 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:29.069 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.328 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:20:29.328 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:29.328 11:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:29.328 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.328 11:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 [2024-11-20 11:32:36.960848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:29.328 [2024-11-20 11:32:36.961216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.328 [2024-11-20 11:32:37.048480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:29.328 
11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.328 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 [2024-11-20 11:32:37.100531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:29.587 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.588 [2024-11-20 11:32:37.246407] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:29.588 [2024-11-20 11:32:37.246493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.588 BaseBdev2 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:29.588 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 [ 00:20:29.848 { 00:20:29.848 "name": "BaseBdev2", 00:20:29.848 "aliases": [ 00:20:29.848 "d2e8f823-2ee1-4450-8889-12f9cdd33dd4" 00:20:29.848 ], 00:20:29.848 "product_name": "Malloc disk", 00:20:29.848 "block_size": 512, 00:20:29.848 "num_blocks": 65536, 00:20:29.848 "uuid": 
"d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:29.848 "assigned_rate_limits": { 00:20:29.848 "rw_ios_per_sec": 0, 00:20:29.848 "rw_mbytes_per_sec": 0, 00:20:29.848 "r_mbytes_per_sec": 0, 00:20:29.848 "w_mbytes_per_sec": 0 00:20:29.848 }, 00:20:29.848 "claimed": false, 00:20:29.848 "zoned": false, 00:20:29.848 "supported_io_types": { 00:20:29.848 "read": true, 00:20:29.848 "write": true, 00:20:29.848 "unmap": true, 00:20:29.848 "flush": true, 00:20:29.848 "reset": true, 00:20:29.848 "nvme_admin": false, 00:20:29.848 "nvme_io": false, 00:20:29.848 "nvme_io_md": false, 00:20:29.848 "write_zeroes": true, 00:20:29.848 "zcopy": true, 00:20:29.848 "get_zone_info": false, 00:20:29.848 "zone_management": false, 00:20:29.848 "zone_append": false, 00:20:29.848 "compare": false, 00:20:29.848 "compare_and_write": false, 00:20:29.848 "abort": true, 00:20:29.848 "seek_hole": false, 00:20:29.848 "seek_data": false, 00:20:29.848 "copy": true, 00:20:29.848 "nvme_iov_md": false 00:20:29.848 }, 00:20:29.848 "memory_domains": [ 00:20:29.848 { 00:20:29.848 "dma_device_id": "system", 00:20:29.848 "dma_device_type": 1 00:20:29.848 }, 00:20:29.848 { 00:20:29.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.848 "dma_device_type": 2 00:20:29.848 } 00:20:29.848 ], 00:20:29.848 "driver_specific": {} 00:20:29.848 } 00:20:29.848 ] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 BaseBdev3 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.848 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 [ 00:20:29.848 { 00:20:29.848 "name": "BaseBdev3", 00:20:29.848 "aliases": [ 00:20:29.848 "556b3871-5a1a-4102-aa3e-681164a246c0" 00:20:29.848 ], 00:20:29.848 
"product_name": "Malloc disk", 00:20:29.848 "block_size": 512, 00:20:29.848 "num_blocks": 65536, 00:20:29.848 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:29.848 "assigned_rate_limits": { 00:20:29.848 "rw_ios_per_sec": 0, 00:20:29.848 "rw_mbytes_per_sec": 0, 00:20:29.848 "r_mbytes_per_sec": 0, 00:20:29.848 "w_mbytes_per_sec": 0 00:20:29.848 }, 00:20:29.848 "claimed": false, 00:20:29.848 "zoned": false, 00:20:29.848 "supported_io_types": { 00:20:29.848 "read": true, 00:20:29.848 "write": true, 00:20:29.848 "unmap": true, 00:20:29.848 "flush": true, 00:20:29.848 "reset": true, 00:20:29.848 "nvme_admin": false, 00:20:29.848 "nvme_io": false, 00:20:29.848 "nvme_io_md": false, 00:20:29.848 "write_zeroes": true, 00:20:29.848 "zcopy": true, 00:20:29.848 "get_zone_info": false, 00:20:29.848 "zone_management": false, 00:20:29.848 "zone_append": false, 00:20:29.848 "compare": false, 00:20:29.848 "compare_and_write": false, 00:20:29.848 "abort": true, 00:20:29.848 "seek_hole": false, 00:20:29.848 "seek_data": false, 00:20:29.848 "copy": true, 00:20:29.849 "nvme_iov_md": false 00:20:29.849 }, 00:20:29.849 "memory_domains": [ 00:20:29.849 { 00:20:29.849 "dma_device_id": "system", 00:20:29.849 "dma_device_type": 1 00:20:29.849 }, 00:20:29.849 { 00:20:29.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.849 "dma_device_type": 2 00:20:29.849 } 00:20:29.849 ], 00:20:29.849 "driver_specific": {} 00:20:29.849 } 00:20:29.849 ] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 BaseBdev4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 [ 00:20:29.849 { 00:20:29.849 "name": "BaseBdev4", 00:20:29.849 
"aliases": [ 00:20:29.849 "974e8861-6874-4c98-83a5-d2bde2cd2e68" 00:20:29.849 ], 00:20:29.849 "product_name": "Malloc disk", 00:20:29.849 "block_size": 512, 00:20:29.849 "num_blocks": 65536, 00:20:29.849 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:29.849 "assigned_rate_limits": { 00:20:29.849 "rw_ios_per_sec": 0, 00:20:29.849 "rw_mbytes_per_sec": 0, 00:20:29.849 "r_mbytes_per_sec": 0, 00:20:29.849 "w_mbytes_per_sec": 0 00:20:29.849 }, 00:20:29.849 "claimed": false, 00:20:29.849 "zoned": false, 00:20:29.849 "supported_io_types": { 00:20:29.849 "read": true, 00:20:29.849 "write": true, 00:20:29.849 "unmap": true, 00:20:29.849 "flush": true, 00:20:29.849 "reset": true, 00:20:29.849 "nvme_admin": false, 00:20:29.849 "nvme_io": false, 00:20:29.849 "nvme_io_md": false, 00:20:29.849 "write_zeroes": true, 00:20:29.849 "zcopy": true, 00:20:29.849 "get_zone_info": false, 00:20:29.849 "zone_management": false, 00:20:29.849 "zone_append": false, 00:20:29.849 "compare": false, 00:20:29.849 "compare_and_write": false, 00:20:29.849 "abort": true, 00:20:29.849 "seek_hole": false, 00:20:29.849 "seek_data": false, 00:20:29.849 "copy": true, 00:20:29.849 "nvme_iov_md": false 00:20:29.849 }, 00:20:29.849 "memory_domains": [ 00:20:29.849 { 00:20:29.849 "dma_device_id": "system", 00:20:29.849 "dma_device_type": 1 00:20:29.849 }, 00:20:29.849 { 00:20:29.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.849 "dma_device_type": 2 00:20:29.849 } 00:20:29.849 ], 00:20:29.849 "driver_specific": {} 00:20:29.849 } 00:20:29.849 ] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:29.849 
11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 [2024-11-20 11:32:37.623421] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:29.849 [2024-11-20 11:32:37.623609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:29.849 [2024-11-20 11:32:37.623672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.849 [2024-11-20 11:32:37.626071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.849 [2024-11-20 11:32:37.626170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.849 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.849 "name": "Existed_Raid", 00:20:29.849 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:29.849 "strip_size_kb": 64, 00:20:29.849 "state": "configuring", 00:20:29.849 "raid_level": "raid5f", 00:20:29.849 "superblock": true, 00:20:29.849 "num_base_bdevs": 4, 00:20:29.849 "num_base_bdevs_discovered": 3, 00:20:29.850 "num_base_bdevs_operational": 4, 00:20:29.850 "base_bdevs_list": [ 00:20:29.850 { 00:20:29.850 "name": "BaseBdev1", 00:20:29.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.850 "is_configured": false, 00:20:29.850 "data_offset": 0, 00:20:29.850 "data_size": 0 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "name": "BaseBdev2", 00:20:29.850 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "name": "BaseBdev3", 
00:20:29.850 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 }, 00:20:29.850 { 00:20:29.850 "name": "BaseBdev4", 00:20:29.850 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:29.850 "is_configured": true, 00:20:29.850 "data_offset": 2048, 00:20:29.850 "data_size": 63488 00:20:29.850 } 00:20:29.850 ] 00:20:29.850 }' 00:20:29.850 11:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.850 11:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.417 [2024-11-20 11:32:38.151571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.417 
11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.417 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.417 "name": "Existed_Raid", 00:20:30.417 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:30.417 "strip_size_kb": 64, 00:20:30.417 "state": "configuring", 00:20:30.417 "raid_level": "raid5f", 00:20:30.417 "superblock": true, 00:20:30.417 "num_base_bdevs": 4, 00:20:30.417 "num_base_bdevs_discovered": 2, 00:20:30.417 "num_base_bdevs_operational": 4, 00:20:30.417 "base_bdevs_list": [ 00:20:30.417 { 00:20:30.417 "name": "BaseBdev1", 00:20:30.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.417 "is_configured": false, 00:20:30.417 "data_offset": 0, 00:20:30.417 "data_size": 0 00:20:30.417 }, 00:20:30.417 { 00:20:30.417 "name": null, 00:20:30.417 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:30.417 "is_configured": false, 00:20:30.417 "data_offset": 0, 00:20:30.417 "data_size": 63488 00:20:30.417 }, 00:20:30.417 { 
00:20:30.417 "name": "BaseBdev3", 00:20:30.417 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:30.417 "is_configured": true, 00:20:30.417 "data_offset": 2048, 00:20:30.417 "data_size": 63488 00:20:30.417 }, 00:20:30.417 { 00:20:30.417 "name": "BaseBdev4", 00:20:30.417 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:30.418 "is_configured": true, 00:20:30.418 "data_offset": 2048, 00:20:30.418 "data_size": 63488 00:20:30.418 } 00:20:30.418 ] 00:20:30.418 }' 00:20:30.418 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.418 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 [2024-11-20 11:32:38.762640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.984 BaseBdev1 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.984 [ 00:20:30.984 { 00:20:30.984 "name": "BaseBdev1", 00:20:30.984 "aliases": [ 00:20:30.984 "2b38cc99-7c9f-44cb-9fb1-296e43a532d2" 00:20:30.984 ], 00:20:30.984 "product_name": "Malloc disk", 00:20:30.984 "block_size": 512, 00:20:30.984 "num_blocks": 65536, 00:20:30.984 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:30.984 "assigned_rate_limits": { 00:20:30.984 "rw_ios_per_sec": 0, 00:20:30.984 "rw_mbytes_per_sec": 0, 00:20:30.984 
"r_mbytes_per_sec": 0, 00:20:30.984 "w_mbytes_per_sec": 0 00:20:30.984 }, 00:20:30.984 "claimed": true, 00:20:30.984 "claim_type": "exclusive_write", 00:20:30.984 "zoned": false, 00:20:30.984 "supported_io_types": { 00:20:30.984 "read": true, 00:20:30.984 "write": true, 00:20:30.984 "unmap": true, 00:20:30.984 "flush": true, 00:20:30.984 "reset": true, 00:20:30.984 "nvme_admin": false, 00:20:30.984 "nvme_io": false, 00:20:30.984 "nvme_io_md": false, 00:20:30.984 "write_zeroes": true, 00:20:30.984 "zcopy": true, 00:20:30.984 "get_zone_info": false, 00:20:30.984 "zone_management": false, 00:20:30.984 "zone_append": false, 00:20:30.984 "compare": false, 00:20:30.984 "compare_and_write": false, 00:20:30.984 "abort": true, 00:20:30.984 "seek_hole": false, 00:20:30.984 "seek_data": false, 00:20:30.984 "copy": true, 00:20:30.984 "nvme_iov_md": false 00:20:30.984 }, 00:20:30.984 "memory_domains": [ 00:20:30.984 { 00:20:30.984 "dma_device_id": "system", 00:20:30.984 "dma_device_type": 1 00:20:30.984 }, 00:20:30.984 { 00:20:30.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.984 "dma_device_type": 2 00:20:30.984 } 00:20:30.984 ], 00:20:30.984 "driver_specific": {} 00:20:30.984 } 00:20:30.984 ] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.984 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.985 11:32:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.985 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.243 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.243 "name": "Existed_Raid", 00:20:31.243 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:31.243 "strip_size_kb": 64, 00:20:31.243 "state": "configuring", 00:20:31.243 "raid_level": "raid5f", 00:20:31.243 "superblock": true, 00:20:31.243 "num_base_bdevs": 4, 00:20:31.243 "num_base_bdevs_discovered": 3, 00:20:31.243 "num_base_bdevs_operational": 4, 00:20:31.243 "base_bdevs_list": [ 00:20:31.243 { 00:20:31.243 "name": "BaseBdev1", 00:20:31.243 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:31.243 "is_configured": true, 00:20:31.243 "data_offset": 2048, 00:20:31.243 "data_size": 63488 00:20:31.243 
}, 00:20:31.243 { 00:20:31.243 "name": null, 00:20:31.243 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:31.243 "is_configured": false, 00:20:31.243 "data_offset": 0, 00:20:31.243 "data_size": 63488 00:20:31.243 }, 00:20:31.243 { 00:20:31.243 "name": "BaseBdev3", 00:20:31.243 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:31.243 "is_configured": true, 00:20:31.243 "data_offset": 2048, 00:20:31.243 "data_size": 63488 00:20:31.243 }, 00:20:31.243 { 00:20:31.243 "name": "BaseBdev4", 00:20:31.243 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:31.243 "is_configured": true, 00:20:31.243 "data_offset": 2048, 00:20:31.243 "data_size": 63488 00:20:31.243 } 00:20:31.243 ] 00:20:31.243 }' 00:20:31.243 11:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.243 11:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.508 
[2024-11-20 11:32:39.338875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.508 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.786 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:31.786 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.786 "name": "Existed_Raid", 00:20:31.786 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:31.786 "strip_size_kb": 64, 00:20:31.786 "state": "configuring", 00:20:31.786 "raid_level": "raid5f", 00:20:31.786 "superblock": true, 00:20:31.786 "num_base_bdevs": 4, 00:20:31.786 "num_base_bdevs_discovered": 2, 00:20:31.786 "num_base_bdevs_operational": 4, 00:20:31.786 "base_bdevs_list": [ 00:20:31.786 { 00:20:31.786 "name": "BaseBdev1", 00:20:31.786 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:31.786 "is_configured": true, 00:20:31.786 "data_offset": 2048, 00:20:31.786 "data_size": 63488 00:20:31.786 }, 00:20:31.786 { 00:20:31.786 "name": null, 00:20:31.786 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:31.786 "is_configured": false, 00:20:31.786 "data_offset": 0, 00:20:31.786 "data_size": 63488 00:20:31.786 }, 00:20:31.786 { 00:20:31.786 "name": null, 00:20:31.786 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:31.786 "is_configured": false, 00:20:31.786 "data_offset": 0, 00:20:31.786 "data_size": 63488 00:20:31.786 }, 00:20:31.786 { 00:20:31.786 "name": "BaseBdev4", 00:20:31.786 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:31.786 "is_configured": true, 00:20:31.786 "data_offset": 2048, 00:20:31.786 "data_size": 63488 00:20:31.786 } 00:20:31.786 ] 00:20:31.786 }' 00:20:31.786 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.786 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.109 [2024-11-20 11:32:39.939091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.109 11:32:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.109 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.368 11:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.368 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.368 "name": "Existed_Raid", 00:20:32.368 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:32.368 "strip_size_kb": 64, 00:20:32.368 "state": "configuring", 00:20:32.368 "raid_level": "raid5f", 00:20:32.368 "superblock": true, 00:20:32.368 "num_base_bdevs": 4, 00:20:32.368 "num_base_bdevs_discovered": 3, 00:20:32.368 "num_base_bdevs_operational": 4, 00:20:32.368 "base_bdevs_list": [ 00:20:32.368 { 00:20:32.368 "name": "BaseBdev1", 00:20:32.368 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:32.368 "is_configured": true, 00:20:32.368 "data_offset": 2048, 00:20:32.368 "data_size": 63488 00:20:32.368 }, 00:20:32.368 { 00:20:32.368 "name": null, 00:20:32.368 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:32.368 "is_configured": false, 00:20:32.368 "data_offset": 0, 00:20:32.368 "data_size": 63488 00:20:32.368 }, 00:20:32.368 { 00:20:32.368 "name": "BaseBdev3", 00:20:32.368 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:32.368 "is_configured": true, 00:20:32.368 "data_offset": 2048, 00:20:32.368 "data_size": 63488 00:20:32.368 }, 00:20:32.368 { 
00:20:32.368 "name": "BaseBdev4", 00:20:32.368 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:32.368 "is_configured": true, 00:20:32.368 "data_offset": 2048, 00:20:32.368 "data_size": 63488 00:20:32.368 } 00:20:32.368 ] 00:20:32.368 }' 00:20:32.368 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.368 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.626 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.626 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.626 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.626 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:32.626 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 [2024-11-20 11:32:40.499304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.885 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.885 "name": "Existed_Raid", 00:20:32.885 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:32.885 "strip_size_kb": 64, 00:20:32.885 "state": "configuring", 00:20:32.885 "raid_level": "raid5f", 00:20:32.885 "superblock": true, 00:20:32.885 "num_base_bdevs": 4, 00:20:32.885 "num_base_bdevs_discovered": 2, 00:20:32.885 
"num_base_bdevs_operational": 4, 00:20:32.885 "base_bdevs_list": [ 00:20:32.885 { 00:20:32.885 "name": null, 00:20:32.885 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:32.885 "is_configured": false, 00:20:32.885 "data_offset": 0, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": null, 00:20:32.885 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:32.885 "is_configured": false, 00:20:32.885 "data_offset": 0, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": "BaseBdev3", 00:20:32.885 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:32.885 "is_configured": true, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": "BaseBdev4", 00:20:32.885 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:32.885 "is_configured": true, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 } 00:20:32.885 ] 00:20:32.885 }' 00:20:32.885 11:32:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.885 11:32:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.451 [2024-11-20 11:32:41.138187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.451 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.452 "name": "Existed_Raid", 00:20:33.452 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:33.452 "strip_size_kb": 64, 00:20:33.452 "state": "configuring", 00:20:33.452 "raid_level": "raid5f", 00:20:33.452 "superblock": true, 00:20:33.452 "num_base_bdevs": 4, 00:20:33.452 "num_base_bdevs_discovered": 3, 00:20:33.452 "num_base_bdevs_operational": 4, 00:20:33.452 "base_bdevs_list": [ 00:20:33.452 { 00:20:33.452 "name": null, 00:20:33.452 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:33.452 "is_configured": false, 00:20:33.452 "data_offset": 0, 00:20:33.452 "data_size": 63488 00:20:33.452 }, 00:20:33.452 { 00:20:33.452 "name": "BaseBdev2", 00:20:33.452 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:33.452 "is_configured": true, 00:20:33.452 "data_offset": 2048, 00:20:33.452 "data_size": 63488 00:20:33.452 }, 00:20:33.452 { 00:20:33.452 "name": "BaseBdev3", 00:20:33.452 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:33.452 "is_configured": true, 00:20:33.452 "data_offset": 2048, 00:20:33.452 "data_size": 63488 00:20:33.452 }, 00:20:33.452 { 00:20:33.452 "name": "BaseBdev4", 00:20:33.452 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:33.452 "is_configured": true, 00:20:33.452 "data_offset": 2048, 00:20:33.452 "data_size": 63488 00:20:33.452 } 00:20:33.452 ] 00:20:33.452 }' 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.452 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b38cc99-7c9f-44cb-9fb1-296e43a532d2 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 [2024-11-20 11:32:41.764782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:34.020 [2024-11-20 11:32:41.765091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:34.020 [2024-11-20 
11:32:41.765125] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:34.020 NewBaseBdev 00:20:34.020 [2024-11-20 11:32:41.765467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 [2024-11-20 11:32:41.772023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:34.020 [2024-11-20 11:32:41.772068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:34.020 [2024-11-20 11:32:41.772363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.020 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 [ 00:20:34.020 { 00:20:34.020 "name": "NewBaseBdev", 00:20:34.020 "aliases": [ 00:20:34.020 "2b38cc99-7c9f-44cb-9fb1-296e43a532d2" 00:20:34.020 ], 00:20:34.020 "product_name": "Malloc disk", 00:20:34.020 "block_size": 512, 00:20:34.020 "num_blocks": 65536, 00:20:34.020 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:34.020 "assigned_rate_limits": { 00:20:34.020 "rw_ios_per_sec": 0, 00:20:34.020 "rw_mbytes_per_sec": 0, 00:20:34.020 "r_mbytes_per_sec": 0, 00:20:34.020 "w_mbytes_per_sec": 0 00:20:34.020 }, 00:20:34.020 "claimed": true, 00:20:34.020 "claim_type": "exclusive_write", 00:20:34.020 "zoned": false, 00:20:34.020 "supported_io_types": { 00:20:34.020 "read": true, 00:20:34.020 "write": true, 00:20:34.020 "unmap": true, 00:20:34.020 "flush": true, 00:20:34.020 "reset": true, 00:20:34.020 "nvme_admin": false, 00:20:34.020 "nvme_io": false, 00:20:34.020 "nvme_io_md": false, 00:20:34.020 "write_zeroes": true, 00:20:34.021 "zcopy": true, 00:20:34.021 "get_zone_info": false, 00:20:34.021 "zone_management": false, 00:20:34.021 "zone_append": false, 00:20:34.021 "compare": false, 00:20:34.021 "compare_and_write": false, 00:20:34.021 "abort": true, 00:20:34.021 "seek_hole": false, 00:20:34.021 "seek_data": false, 00:20:34.021 "copy": true, 00:20:34.021 "nvme_iov_md": false 00:20:34.021 }, 00:20:34.021 "memory_domains": [ 00:20:34.021 { 00:20:34.021 "dma_device_id": "system", 00:20:34.021 "dma_device_type": 1 00:20:34.021 }, 00:20:34.021 { 00:20:34.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.021 "dma_device_type": 2 00:20:34.021 } 00:20:34.021 ], 00:20:34.021 "driver_specific": {} 00:20:34.021 } 00:20:34.021 ] 00:20:34.021 11:32:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.021 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:34.280 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.280 "name": "Existed_Raid", 00:20:34.280 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:34.280 "strip_size_kb": 64, 00:20:34.280 "state": "online", 00:20:34.280 "raid_level": "raid5f", 00:20:34.280 "superblock": true, 00:20:34.280 "num_base_bdevs": 4, 00:20:34.280 "num_base_bdevs_discovered": 4, 00:20:34.280 "num_base_bdevs_operational": 4, 00:20:34.280 "base_bdevs_list": [ 00:20:34.280 { 00:20:34.280 "name": "NewBaseBdev", 00:20:34.280 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:34.280 "is_configured": true, 00:20:34.280 "data_offset": 2048, 00:20:34.280 "data_size": 63488 00:20:34.280 }, 00:20:34.280 { 00:20:34.280 "name": "BaseBdev2", 00:20:34.280 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:34.280 "is_configured": true, 00:20:34.280 "data_offset": 2048, 00:20:34.280 "data_size": 63488 00:20:34.280 }, 00:20:34.280 { 00:20:34.280 "name": "BaseBdev3", 00:20:34.280 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:34.280 "is_configured": true, 00:20:34.280 "data_offset": 2048, 00:20:34.280 "data_size": 63488 00:20:34.280 }, 00:20:34.280 { 00:20:34.280 "name": "BaseBdev4", 00:20:34.280 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:34.280 "is_configured": true, 00:20:34.280 "data_offset": 2048, 00:20:34.280 "data_size": 63488 00:20:34.280 } 00:20:34.280 ] 00:20:34.280 }' 00:20:34.280 11:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.280 11:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.538 [2024-11-20 11:32:42.300578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.538 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:34.538 "name": "Existed_Raid", 00:20:34.538 "aliases": [ 00:20:34.538 "2d311994-d7bd-4ffd-bfa5-ce1826c6832a" 00:20:34.538 ], 00:20:34.538 "product_name": "Raid Volume", 00:20:34.538 "block_size": 512, 00:20:34.538 "num_blocks": 190464, 00:20:34.538 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:34.538 "assigned_rate_limits": { 00:20:34.538 "rw_ios_per_sec": 0, 00:20:34.538 "rw_mbytes_per_sec": 0, 00:20:34.538 "r_mbytes_per_sec": 0, 00:20:34.538 "w_mbytes_per_sec": 0 00:20:34.538 }, 00:20:34.538 "claimed": false, 00:20:34.538 "zoned": false, 00:20:34.538 "supported_io_types": { 00:20:34.538 "read": true, 00:20:34.538 "write": true, 00:20:34.538 "unmap": false, 00:20:34.538 "flush": false, 00:20:34.538 "reset": true, 00:20:34.538 "nvme_admin": false, 00:20:34.538 "nvme_io": false, 
00:20:34.538 "nvme_io_md": false, 00:20:34.538 "write_zeroes": true, 00:20:34.538 "zcopy": false, 00:20:34.538 "get_zone_info": false, 00:20:34.538 "zone_management": false, 00:20:34.538 "zone_append": false, 00:20:34.538 "compare": false, 00:20:34.538 "compare_and_write": false, 00:20:34.538 "abort": false, 00:20:34.538 "seek_hole": false, 00:20:34.538 "seek_data": false, 00:20:34.538 "copy": false, 00:20:34.538 "nvme_iov_md": false 00:20:34.538 }, 00:20:34.538 "driver_specific": { 00:20:34.538 "raid": { 00:20:34.538 "uuid": "2d311994-d7bd-4ffd-bfa5-ce1826c6832a", 00:20:34.538 "strip_size_kb": 64, 00:20:34.538 "state": "online", 00:20:34.538 "raid_level": "raid5f", 00:20:34.538 "superblock": true, 00:20:34.538 "num_base_bdevs": 4, 00:20:34.538 "num_base_bdevs_discovered": 4, 00:20:34.538 "num_base_bdevs_operational": 4, 00:20:34.538 "base_bdevs_list": [ 00:20:34.538 { 00:20:34.538 "name": "NewBaseBdev", 00:20:34.538 "uuid": "2b38cc99-7c9f-44cb-9fb1-296e43a532d2", 00:20:34.538 "is_configured": true, 00:20:34.538 "data_offset": 2048, 00:20:34.538 "data_size": 63488 00:20:34.538 }, 00:20:34.538 { 00:20:34.538 "name": "BaseBdev2", 00:20:34.539 "uuid": "d2e8f823-2ee1-4450-8889-12f9cdd33dd4", 00:20:34.539 "is_configured": true, 00:20:34.539 "data_offset": 2048, 00:20:34.539 "data_size": 63488 00:20:34.539 }, 00:20:34.539 { 00:20:34.539 "name": "BaseBdev3", 00:20:34.539 "uuid": "556b3871-5a1a-4102-aa3e-681164a246c0", 00:20:34.539 "is_configured": true, 00:20:34.539 "data_offset": 2048, 00:20:34.539 "data_size": 63488 00:20:34.539 }, 00:20:34.539 { 00:20:34.539 "name": "BaseBdev4", 00:20:34.539 "uuid": "974e8861-6874-4c98-83a5-d2bde2cd2e68", 00:20:34.539 "is_configured": true, 00:20:34.539 "data_offset": 2048, 00:20:34.539 "data_size": 63488 00:20:34.539 } 00:20:34.539 ] 00:20:34.539 } 00:20:34.539 } 00:20:34.539 }' 00:20:34.539 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:34.797 BaseBdev2 00:20:34.797 BaseBdev3 00:20:34.797 BaseBdev4' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.797 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.798 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.798 [2024-11-20 11:32:42.636306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.798 [2024-11-20 11:32:42.636345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.798 [2024-11-20 11:32:42.636435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.798 [2024-11-20 11:32:42.636831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.798 [2024-11-20 11:32:42.636856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83746 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83746 ']' 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83746 00:20:35.056 11:32:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83746 00:20:35.056 killing process with pid 83746 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83746' 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83746 00:20:35.056 [2024-11-20 11:32:42.674758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.056 11:32:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83746 00:20:35.315 [2024-11-20 11:32:43.041249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.690 11:32:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:36.690 00:20:36.690 real 0m12.706s 00:20:36.690 user 0m20.981s 00:20:36.690 sys 0m1.766s 00:20:36.690 11:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.690 11:32:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.690 ************************************ 00:20:36.690 END TEST raid5f_state_function_test_sb 00:20:36.691 ************************************ 00:20:36.691 11:32:44 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:20:36.691 11:32:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:36.691 
11:32:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.691 11:32:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.691 ************************************ 00:20:36.691 START TEST raid5f_superblock_test 00:20:36.691 ************************************ 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84428 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84428 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84428 ']' 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.691 11:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.691 [2024-11-20 11:32:44.284976] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:20:36.691 [2024-11-20 11:32:44.285200] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84428 ] 00:20:36.691 [2024-11-20 11:32:44.472164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.950 [2024-11-20 11:32:44.616530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.208 [2024-11-20 11:32:44.819103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.208 [2024-11-20 11:32:44.819182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:37.777 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 malloc1 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 [2024-11-20 11:32:45.376218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:37.778 [2024-11-20 11:32:45.376293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.778 [2024-11-20 11:32:45.376329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:37.778 [2024-11-20 11:32:45.376345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.778 [2024-11-20 11:32:45.379110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.778 [2024-11-20 11:32:45.379156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:37.778 pt1 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 malloc2 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 [2024-11-20 11:32:45.429957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.778 [2024-11-20 11:32:45.430025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.778 [2024-11-20 11:32:45.430057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:37.778 [2024-11-20 11:32:45.430071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.778 [2024-11-20 11:32:45.433068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.778 [2024-11-20 11:32:45.433126] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.778 pt2 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 malloc3 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 [2024-11-20 11:32:45.500878] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:37.778 [2024-11-20 11:32:45.500968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.778 [2024-11-20 11:32:45.501025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:37.778 [2024-11-20 11:32:45.501041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.778 [2024-11-20 11:32:45.503966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.778 [2024-11-20 11:32:45.504037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:37.778 pt3 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 malloc4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.778 [2024-11-20 11:32:45.556338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:37.778 [2024-11-20 11:32:45.556429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.778 [2024-11-20 11:32:45.556457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:37.778 [2024-11-20 11:32:45.556471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.778 [2024-11-20 11:32:45.559370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.778 [2024-11-20 11:32:45.559425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:37.778 pt4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.778 [2024-11-20 11:32:45.568477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:37.778 [2024-11-20 11:32:45.571318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.778 [2024-11-20 11:32:45.571428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:37.778 [2024-11-20 11:32:45.571517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:37.778 [2024-11-20 11:32:45.571861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:37.778 [2024-11-20 11:32:45.571885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:37.778 [2024-11-20 11:32:45.572279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:37.778 [2024-11-20 11:32:45.579000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:37.778 [2024-11-20 11:32:45.579046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:37.778 [2024-11-20 11:32:45.579338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.778 
11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.778 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.779 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.037 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.037 "name": "raid_bdev1", 00:20:38.037 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:38.037 "strip_size_kb": 64, 00:20:38.037 "state": "online", 00:20:38.037 "raid_level": "raid5f", 00:20:38.037 "superblock": true, 00:20:38.037 "num_base_bdevs": 4, 00:20:38.037 "num_base_bdevs_discovered": 4, 00:20:38.037 "num_base_bdevs_operational": 4, 00:20:38.037 "base_bdevs_list": [ 00:20:38.037 { 00:20:38.037 "name": "pt1", 00:20:38.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.037 "is_configured": true, 00:20:38.037 "data_offset": 2048, 00:20:38.037 "data_size": 63488 00:20:38.037 }, 00:20:38.037 { 00:20:38.037 "name": "pt2", 00:20:38.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.037 "is_configured": true, 00:20:38.037 "data_offset": 2048, 00:20:38.037 
"data_size": 63488 00:20:38.037 }, 00:20:38.037 { 00:20:38.037 "name": "pt3", 00:20:38.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.037 "is_configured": true, 00:20:38.037 "data_offset": 2048, 00:20:38.037 "data_size": 63488 00:20:38.037 }, 00:20:38.037 { 00:20:38.037 "name": "pt4", 00:20:38.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.037 "is_configured": true, 00:20:38.037 "data_offset": 2048, 00:20:38.037 "data_size": 63488 00:20:38.037 } 00:20:38.037 ] 00:20:38.037 }' 00:20:38.037 11:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.037 11:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.297 [2024-11-20 11:32:46.107523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.297 "name": "raid_bdev1", 00:20:38.297 "aliases": [ 00:20:38.297 "51b81455-3f74-4323-8e79-f1ed98edbbd1" 00:20:38.297 ], 00:20:38.297 "product_name": "Raid Volume", 00:20:38.297 "block_size": 512, 00:20:38.297 "num_blocks": 190464, 00:20:38.297 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:38.297 "assigned_rate_limits": { 00:20:38.297 "rw_ios_per_sec": 0, 00:20:38.297 "rw_mbytes_per_sec": 0, 00:20:38.297 "r_mbytes_per_sec": 0, 00:20:38.297 "w_mbytes_per_sec": 0 00:20:38.297 }, 00:20:38.297 "claimed": false, 00:20:38.297 "zoned": false, 00:20:38.297 "supported_io_types": { 00:20:38.297 "read": true, 00:20:38.297 "write": true, 00:20:38.297 "unmap": false, 00:20:38.297 "flush": false, 00:20:38.297 "reset": true, 00:20:38.297 "nvme_admin": false, 00:20:38.297 "nvme_io": false, 00:20:38.297 "nvme_io_md": false, 00:20:38.297 "write_zeroes": true, 00:20:38.297 "zcopy": false, 00:20:38.297 "get_zone_info": false, 00:20:38.297 "zone_management": false, 00:20:38.297 "zone_append": false, 00:20:38.297 "compare": false, 00:20:38.297 "compare_and_write": false, 00:20:38.297 "abort": false, 00:20:38.297 "seek_hole": false, 00:20:38.297 "seek_data": false, 00:20:38.297 "copy": false, 00:20:38.297 "nvme_iov_md": false 00:20:38.297 }, 00:20:38.297 "driver_specific": { 00:20:38.297 "raid": { 00:20:38.297 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:38.297 "strip_size_kb": 64, 00:20:38.297 "state": "online", 00:20:38.297 "raid_level": "raid5f", 00:20:38.297 "superblock": true, 00:20:38.297 "num_base_bdevs": 4, 00:20:38.297 "num_base_bdevs_discovered": 4, 00:20:38.297 "num_base_bdevs_operational": 4, 00:20:38.297 "base_bdevs_list": [ 00:20:38.297 { 00:20:38.297 "name": "pt1", 00:20:38.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.297 "is_configured": true, 00:20:38.297 "data_offset": 2048, 
00:20:38.297 "data_size": 63488 00:20:38.297 }, 00:20:38.297 { 00:20:38.297 "name": "pt2", 00:20:38.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.297 "is_configured": true, 00:20:38.297 "data_offset": 2048, 00:20:38.297 "data_size": 63488 00:20:38.297 }, 00:20:38.297 { 00:20:38.297 "name": "pt3", 00:20:38.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.297 "is_configured": true, 00:20:38.297 "data_offset": 2048, 00:20:38.297 "data_size": 63488 00:20:38.297 }, 00:20:38.297 { 00:20:38.297 "name": "pt4", 00:20:38.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.297 "is_configured": true, 00:20:38.297 "data_offset": 2048, 00:20:38.297 "data_size": 63488 00:20:38.297 } 00:20:38.297 ] 00:20:38.297 } 00:20:38.297 } 00:20:38.297 }' 00:20:38.297 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:38.557 pt2 00:20:38.557 pt3 00:20:38.557 pt4' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.557 11:32:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.557 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.815 [2024-11-20 11:32:46.463605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51b81455-3f74-4323-8e79-f1ed98edbbd1 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
51b81455-3f74-4323-8e79-f1ed98edbbd1 ']' 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.815 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.815 [2024-11-20 11:32:46.511393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:38.816 [2024-11-20 11:32:46.511441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:38.816 [2024-11-20 11:32:46.511543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.816 [2024-11-20 11:32:46.511672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.816 [2024-11-20 11:32:46.511698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.816 
11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:38.816 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.076 [2024-11-20 11:32:46.667443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:39.076 [2024-11-20 11:32:46.670125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:39.076 [2024-11-20 11:32:46.670215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:39.076 [2024-11-20 11:32:46.670271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:39.076 [2024-11-20 11:32:46.670348] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:39.076 [2024-11-20 11:32:46.670420] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:39.076 [2024-11-20 11:32:46.670454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:39.076 [2024-11-20 11:32:46.670486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:39.076 [2024-11-20 11:32:46.670509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.076 [2024-11-20 11:32:46.670525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:39.076 request: 00:20:39.076 { 00:20:39.076 "name": "raid_bdev1", 00:20:39.076 "raid_level": "raid5f", 00:20:39.076 "base_bdevs": [ 00:20:39.076 "malloc1", 00:20:39.076 "malloc2", 00:20:39.076 "malloc3", 00:20:39.076 "malloc4" 00:20:39.076 ], 00:20:39.076 "strip_size_kb": 64, 00:20:39.076 "superblock": false, 00:20:39.076 "method": "bdev_raid_create", 00:20:39.076 "req_id": 1 00:20:39.076 } 00:20:39.076 Got JSON-RPC error response 
00:20:39.076 response: 00:20:39.076 { 00:20:39.076 "code": -17, 00:20:39.076 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:39.076 } 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.076 [2024-11-20 11:32:46.727408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:39.076 [2024-11-20 11:32:46.727498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:20:39.076 [2024-11-20 11:32:46.727536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:39.076 [2024-11-20 11:32:46.727562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.076 [2024-11-20 11:32:46.730735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.076 [2024-11-20 11:32:46.730787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:39.076 [2024-11-20 11:32:46.730890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:39.076 [2024-11-20 11:32:46.731027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:39.076 pt1 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.076 "name": "raid_bdev1", 00:20:39.076 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:39.076 "strip_size_kb": 64, 00:20:39.076 "state": "configuring", 00:20:39.076 "raid_level": "raid5f", 00:20:39.076 "superblock": true, 00:20:39.076 "num_base_bdevs": 4, 00:20:39.076 "num_base_bdevs_discovered": 1, 00:20:39.076 "num_base_bdevs_operational": 4, 00:20:39.076 "base_bdevs_list": [ 00:20:39.076 { 00:20:39.076 "name": "pt1", 00:20:39.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.076 "is_configured": true, 00:20:39.076 "data_offset": 2048, 00:20:39.076 "data_size": 63488 00:20:39.076 }, 00:20:39.076 { 00:20:39.076 "name": null, 00:20:39.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.076 "is_configured": false, 00:20:39.076 "data_offset": 2048, 00:20:39.076 "data_size": 63488 00:20:39.076 }, 00:20:39.076 { 00:20:39.076 "name": null, 00:20:39.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.076 "is_configured": false, 00:20:39.076 "data_offset": 2048, 00:20:39.076 "data_size": 63488 00:20:39.076 }, 00:20:39.076 { 00:20:39.076 "name": null, 00:20:39.076 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.076 "is_configured": false, 00:20:39.076 "data_offset": 2048, 00:20:39.076 "data_size": 63488 00:20:39.076 } 00:20:39.076 ] 00:20:39.076 }' 
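The `verify_raid_bdev_state` helper above fetches `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`, then checks the state fields. A minimal Python sketch of those checks follows; the JSON is taken verbatim from the dump above, and the `verify_raid_bdev_state` function here is an illustration of the checks, not the actual SPDK test code.

```python
import json

# State dump captured by bdev_raid.sh@113 (rpc_cmd bdev_raid_get_bdevs all,
# filtered through jq), reproduced from the log above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": false, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000003",
     "is_configured": false, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000004",
     "is_configured": false, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field checks verify_raid_bdev_state performs on the jq output:
    # state, RAID level, strip size, and operational base-bdev count.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info["num_base_bdevs_discovered"]

discovered = verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
print(discovered)  # only pt1 is configured at this point
```

At this stage only pt1 has been claimed, so the array sits in `configuring` with 1 of 4 base bdevs discovered, which is exactly what the `verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4` call asserts.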
00:20:39.076 11:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.077 11:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.645 [2024-11-20 11:32:47.223602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:39.645 [2024-11-20 11:32:47.223706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.645 [2024-11-20 11:32:47.223736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:39.645 [2024-11-20 11:32:47.223753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.645 [2024-11-20 11:32:47.224301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.645 [2024-11-20 11:32:47.224342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:39.645 [2024-11-20 11:32:47.224459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:39.645 [2024-11-20 11:32:47.224496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:39.645 pt2 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.645 [2024-11-20 11:32:47.231576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.645 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.646 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.646 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:39.646 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.646 "name": "raid_bdev1", 00:20:39.646 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:39.646 "strip_size_kb": 64, 00:20:39.646 "state": "configuring", 00:20:39.646 "raid_level": "raid5f", 00:20:39.646 "superblock": true, 00:20:39.646 "num_base_bdevs": 4, 00:20:39.646 "num_base_bdevs_discovered": 1, 00:20:39.646 "num_base_bdevs_operational": 4, 00:20:39.646 "base_bdevs_list": [ 00:20:39.646 { 00:20:39.646 "name": "pt1", 00:20:39.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:39.646 "is_configured": true, 00:20:39.646 "data_offset": 2048, 00:20:39.646 "data_size": 63488 00:20:39.646 }, 00:20:39.646 { 00:20:39.646 "name": null, 00:20:39.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.646 "is_configured": false, 00:20:39.646 "data_offset": 0, 00:20:39.646 "data_size": 63488 00:20:39.646 }, 00:20:39.646 { 00:20:39.646 "name": null, 00:20:39.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.646 "is_configured": false, 00:20:39.646 "data_offset": 2048, 00:20:39.646 "data_size": 63488 00:20:39.646 }, 00:20:39.646 { 00:20:39.646 "name": null, 00:20:39.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.646 "is_configured": false, 00:20:39.646 "data_offset": 2048, 00:20:39.646 "data_size": 63488 00:20:39.646 } 00:20:39.646 ] 00:20:39.646 }' 00:20:39.646 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.646 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.923 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:39.923 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:39.923 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
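The `(( i = 1 )) ... (( i < num_base_bdevs ))` loop at `bdev_raid.sh@478`/`@479` visible in the log creates the remaining passthru bdevs pt2 through pt4 over malloc2 through malloc4. A small Python sketch of how those RPC invocations are derived from the loop index; the command strings are built here purely for illustration, while the real script dispatches them via `rpc_cmd` to the SPDK target.

```python
num_base_bdevs = 4

def passthru_create_cmds(n):
    # Matches the shell loop: i runs from 1 to n-1, and index i maps to
    # base bdev i+1 (i=1 -> malloc2/pt2, ..., i=3 -> malloc4/pt4).
    cmds = []
    i = 1
    while i < n:  # (( i < num_base_bdevs ))
        idx = i + 1
        uuid = f"00000000-0000-0000-0000-{idx:012d}"
        cmds.append(f"bdev_passthru_create -b malloc{idx} -p pt{idx} -u {uuid}")
        i += 1
    return cmds

for c in passthru_create_cmds(num_base_bdevs):
    print(c)
```

Each created passthru bdev carries a fixed UUID matching the one recorded in the RAID superblock, which is why `raid_bdev_examine_cont` reports "raid superblock found" and immediately claims each pt bdev; once the fourth is claimed, the array transitions from `configuring` to `online`.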
00:20:39.923 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.923 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.923 [2024-11-20 11:32:47.751740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:39.923 [2024-11-20 11:32:47.751820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.923 [2024-11-20 11:32:47.751851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:39.923 [2024-11-20 11:32:47.751866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.212 [2024-11-20 11:32:47.752440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.212 [2024-11-20 11:32:47.752465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.212 [2024-11-20 11:32:47.752572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:40.212 [2024-11-20 11:32:47.752604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.212 pt2 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.212 [2024-11-20 11:32:47.763768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:20:40.212 [2024-11-20 11:32:47.763857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.212 [2024-11-20 11:32:47.763890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:40.212 [2024-11-20 11:32:47.763905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.212 [2024-11-20 11:32:47.764460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.212 [2024-11-20 11:32:47.764495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:40.212 [2024-11-20 11:32:47.764603] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:40.212 [2024-11-20 11:32:47.764652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:40.212 pt3 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.212 [2024-11-20 11:32:47.775717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:40.212 [2024-11-20 11:32:47.775814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.212 [2024-11-20 11:32:47.775847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:40.212 [2024-11-20 11:32:47.775862] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.212 [2024-11-20 11:32:47.776439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.212 [2024-11-20 11:32:47.776473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:40.212 [2024-11-20 11:32:47.776581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:40.212 [2024-11-20 11:32:47.776628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:40.212 [2024-11-20 11:32:47.776827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:40.212 [2024-11-20 11:32:47.776844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:40.212 [2024-11-20 11:32:47.777141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:40.212 [2024-11-20 11:32:47.783807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:40.212 [2024-11-20 11:32:47.783850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:40.212 [2024-11-20 11:32:47.784133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.212 pt4 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.212 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.213 "name": "raid_bdev1", 00:20:40.213 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:40.213 "strip_size_kb": 64, 00:20:40.213 "state": "online", 00:20:40.213 "raid_level": "raid5f", 00:20:40.213 "superblock": true, 00:20:40.213 "num_base_bdevs": 4, 00:20:40.213 "num_base_bdevs_discovered": 4, 00:20:40.213 "num_base_bdevs_operational": 4, 00:20:40.213 "base_bdevs_list": [ 00:20:40.213 { 00:20:40.213 "name": "pt1", 00:20:40.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.213 "is_configured": true, 00:20:40.213 
"data_offset": 2048, 00:20:40.213 "data_size": 63488 00:20:40.213 }, 00:20:40.213 { 00:20:40.213 "name": "pt2", 00:20:40.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.213 "is_configured": true, 00:20:40.213 "data_offset": 2048, 00:20:40.213 "data_size": 63488 00:20:40.213 }, 00:20:40.213 { 00:20:40.213 "name": "pt3", 00:20:40.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.213 "is_configured": true, 00:20:40.213 "data_offset": 2048, 00:20:40.213 "data_size": 63488 00:20:40.213 }, 00:20:40.213 { 00:20:40.213 "name": "pt4", 00:20:40.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.213 "is_configured": true, 00:20:40.213 "data_offset": 2048, 00:20:40.213 "data_size": 63488 00:20:40.213 } 00:20:40.213 ] 00:20:40.213 }' 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.213 11:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:40.471 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.471 11:32:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.471 [2024-11-20 11:32:48.312043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.730 "name": "raid_bdev1", 00:20:40.730 "aliases": [ 00:20:40.730 "51b81455-3f74-4323-8e79-f1ed98edbbd1" 00:20:40.730 ], 00:20:40.730 "product_name": "Raid Volume", 00:20:40.730 "block_size": 512, 00:20:40.730 "num_blocks": 190464, 00:20:40.730 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:40.730 "assigned_rate_limits": { 00:20:40.730 "rw_ios_per_sec": 0, 00:20:40.730 "rw_mbytes_per_sec": 0, 00:20:40.730 "r_mbytes_per_sec": 0, 00:20:40.730 "w_mbytes_per_sec": 0 00:20:40.730 }, 00:20:40.730 "claimed": false, 00:20:40.730 "zoned": false, 00:20:40.730 "supported_io_types": { 00:20:40.730 "read": true, 00:20:40.730 "write": true, 00:20:40.730 "unmap": false, 00:20:40.730 "flush": false, 00:20:40.730 "reset": true, 00:20:40.730 "nvme_admin": false, 00:20:40.730 "nvme_io": false, 00:20:40.730 "nvme_io_md": false, 00:20:40.730 "write_zeroes": true, 00:20:40.730 "zcopy": false, 00:20:40.730 "get_zone_info": false, 00:20:40.730 "zone_management": false, 00:20:40.730 "zone_append": false, 00:20:40.730 "compare": false, 00:20:40.730 "compare_and_write": false, 00:20:40.730 "abort": false, 00:20:40.730 "seek_hole": false, 00:20:40.730 "seek_data": false, 00:20:40.730 "copy": false, 00:20:40.730 "nvme_iov_md": false 00:20:40.730 }, 00:20:40.730 "driver_specific": { 00:20:40.730 "raid": { 00:20:40.730 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:40.730 "strip_size_kb": 64, 00:20:40.730 "state": "online", 00:20:40.730 "raid_level": "raid5f", 00:20:40.730 "superblock": true, 00:20:40.730 "num_base_bdevs": 4, 00:20:40.730 "num_base_bdevs_discovered": 4, 
00:20:40.730 "num_base_bdevs_operational": 4, 00:20:40.730 "base_bdevs_list": [ 00:20:40.730 { 00:20:40.730 "name": "pt1", 00:20:40.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.730 "is_configured": true, 00:20:40.730 "data_offset": 2048, 00:20:40.730 "data_size": 63488 00:20:40.730 }, 00:20:40.730 { 00:20:40.730 "name": "pt2", 00:20:40.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.730 "is_configured": true, 00:20:40.730 "data_offset": 2048, 00:20:40.730 "data_size": 63488 00:20:40.730 }, 00:20:40.730 { 00:20:40.730 "name": "pt3", 00:20:40.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.730 "is_configured": true, 00:20:40.730 "data_offset": 2048, 00:20:40.730 "data_size": 63488 00:20:40.730 }, 00:20:40.730 { 00:20:40.730 "name": "pt4", 00:20:40.730 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.730 "is_configured": true, 00:20:40.730 "data_offset": 2048, 00:20:40.730 "data_size": 63488 00:20:40.730 } 00:20:40.730 ] 00:20:40.730 } 00:20:40.730 } 00:20:40.730 }' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:40.730 pt2 00:20:40.730 pt3 00:20:40.730 pt4' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.730 11:32:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.730 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:40.989 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:40.990 [2024-11-20 11:32:48.704149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.990 
11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51b81455-3f74-4323-8e79-f1ed98edbbd1 '!=' 51b81455-3f74-4323-8e79-f1ed98edbbd1 ']' 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.990 [2024-11-20 11:32:48.756016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.990 "name": "raid_bdev1", 00:20:40.990 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:40.990 "strip_size_kb": 64, 00:20:40.990 "state": "online", 00:20:40.990 "raid_level": "raid5f", 00:20:40.990 "superblock": true, 00:20:40.990 "num_base_bdevs": 4, 00:20:40.990 "num_base_bdevs_discovered": 3, 00:20:40.990 "num_base_bdevs_operational": 3, 00:20:40.990 "base_bdevs_list": [ 00:20:40.990 { 00:20:40.990 "name": null, 00:20:40.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.990 "is_configured": false, 00:20:40.990 "data_offset": 0, 00:20:40.990 "data_size": 63488 00:20:40.990 }, 00:20:40.990 { 00:20:40.990 "name": "pt2", 00:20:40.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.990 "is_configured": true, 00:20:40.990 "data_offset": 2048, 00:20:40.990 "data_size": 63488 00:20:40.990 }, 00:20:40.990 { 00:20:40.990 "name": "pt3", 00:20:40.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.990 "is_configured": true, 00:20:40.990 "data_offset": 2048, 00:20:40.990 "data_size": 63488 00:20:40.990 }, 00:20:40.990 { 00:20:40.990 "name": "pt4", 00:20:40.990 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.990 "is_configured": true, 00:20:40.990 
"data_offset": 2048, 00:20:40.990 "data_size": 63488 00:20:40.990 } 00:20:40.990 ] 00:20:40.990 }' 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.990 11:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 [2024-11-20 11:32:49.284129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.563 [2024-11-20 11:32:49.284188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.563 [2024-11-20 11:32:49.284318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.563 [2024-11-20 11:32:49.284425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.563 [2024-11-20 11:32:49.284441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.563 [2024-11-20 11:32:49.380103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:41.563 [2024-11-20 11:32:49.380194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.563 [2024-11-20 11:32:49.380221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:41.563 [2024-11-20 11:32:49.380234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.563 [2024-11-20 11:32:49.383350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.563 [2024-11-20 11:32:49.383395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:41.563 [2024-11-20 11:32:49.383500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:41.563 [2024-11-20 11:32:49.383561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:41.563 pt2 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.563 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.822 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.822 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.822 "name": "raid_bdev1", 00:20:41.822 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:41.822 "strip_size_kb": 64, 00:20:41.822 "state": "configuring", 00:20:41.822 "raid_level": "raid5f", 00:20:41.822 "superblock": true, 00:20:41.822 
"num_base_bdevs": 4, 00:20:41.822 "num_base_bdevs_discovered": 1, 00:20:41.822 "num_base_bdevs_operational": 3, 00:20:41.822 "base_bdevs_list": [ 00:20:41.822 { 00:20:41.822 "name": null, 00:20:41.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.822 "is_configured": false, 00:20:41.822 "data_offset": 2048, 00:20:41.822 "data_size": 63488 00:20:41.822 }, 00:20:41.822 { 00:20:41.822 "name": "pt2", 00:20:41.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.822 "is_configured": true, 00:20:41.822 "data_offset": 2048, 00:20:41.822 "data_size": 63488 00:20:41.822 }, 00:20:41.822 { 00:20:41.822 "name": null, 00:20:41.822 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.822 "is_configured": false, 00:20:41.822 "data_offset": 2048, 00:20:41.822 "data_size": 63488 00:20:41.822 }, 00:20:41.822 { 00:20:41.822 "name": null, 00:20:41.822 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.822 "is_configured": false, 00:20:41.822 "data_offset": 2048, 00:20:41.822 "data_size": 63488 00:20:41.822 } 00:20:41.822 ] 00:20:41.822 }' 00:20:41.822 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.822 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.081 [2024-11-20 11:32:49.904400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:42.081 [2024-11-20 
11:32:49.904472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.081 [2024-11-20 11:32:49.904506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:42.081 [2024-11-20 11:32:49.904520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.081 [2024-11-20 11:32:49.905122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.081 [2024-11-20 11:32:49.905167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:42.081 [2024-11-20 11:32:49.905284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:42.081 [2024-11-20 11:32:49.905325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:42.081 pt3 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.081 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.341 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.341 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.341 "name": "raid_bdev1", 00:20:42.341 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:42.341 "strip_size_kb": 64, 00:20:42.341 "state": "configuring", 00:20:42.341 "raid_level": "raid5f", 00:20:42.341 "superblock": true, 00:20:42.341 "num_base_bdevs": 4, 00:20:42.341 "num_base_bdevs_discovered": 2, 00:20:42.341 "num_base_bdevs_operational": 3, 00:20:42.341 "base_bdevs_list": [ 00:20:42.341 { 00:20:42.341 "name": null, 00:20:42.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.341 "is_configured": false, 00:20:42.341 "data_offset": 2048, 00:20:42.341 "data_size": 63488 00:20:42.341 }, 00:20:42.341 { 00:20:42.341 "name": "pt2", 00:20:42.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.341 "is_configured": true, 00:20:42.341 "data_offset": 2048, 00:20:42.341 "data_size": 63488 00:20:42.341 }, 00:20:42.341 { 00:20:42.341 "name": "pt3", 00:20:42.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.341 "is_configured": true, 00:20:42.341 "data_offset": 2048, 00:20:42.341 "data_size": 63488 00:20:42.341 }, 00:20:42.341 { 00:20:42.341 "name": null, 00:20:42.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.341 "is_configured": false, 00:20:42.341 "data_offset": 2048, 
00:20:42.341 "data_size": 63488 00:20:42.341 } 00:20:42.341 ] 00:20:42.341 }' 00:20:42.341 11:32:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.341 11:32:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.600 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.600 [2024-11-20 11:32:50.416531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:42.600 [2024-11-20 11:32:50.416605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.600 [2024-11-20 11:32:50.416657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:42.600 [2024-11-20 11:32:50.416672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.600 [2024-11-20 11:32:50.417259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.600 [2024-11-20 11:32:50.417291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:42.600 [2024-11-20 11:32:50.417403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:42.600 [2024-11-20 11:32:50.417436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:42.600 [2024-11-20 11:32:50.417602] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:42.600 [2024-11-20 11:32:50.417636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:42.600 [2024-11-20 11:32:50.417936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:42.601 [2024-11-20 11:32:50.424297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:42.601 [2024-11-20 11:32:50.424331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:42.601 [2024-11-20 11:32:50.424687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.601 pt4 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.601 
11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.601 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.860 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.860 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.860 "name": "raid_bdev1", 00:20:42.860 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:42.860 "strip_size_kb": 64, 00:20:42.860 "state": "online", 00:20:42.860 "raid_level": "raid5f", 00:20:42.860 "superblock": true, 00:20:42.860 "num_base_bdevs": 4, 00:20:42.860 "num_base_bdevs_discovered": 3, 00:20:42.860 "num_base_bdevs_operational": 3, 00:20:42.860 "base_bdevs_list": [ 00:20:42.860 { 00:20:42.860 "name": null, 00:20:42.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.860 "is_configured": false, 00:20:42.860 "data_offset": 2048, 00:20:42.860 "data_size": 63488 00:20:42.860 }, 00:20:42.860 { 00:20:42.860 "name": "pt2", 00:20:42.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.860 "is_configured": true, 00:20:42.860 "data_offset": 2048, 00:20:42.860 "data_size": 63488 00:20:42.860 }, 00:20:42.860 { 00:20:42.860 "name": "pt3", 00:20:42.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.860 "is_configured": true, 00:20:42.860 "data_offset": 2048, 00:20:42.860 "data_size": 63488 00:20:42.860 }, 00:20:42.860 { 00:20:42.860 "name": "pt4", 00:20:42.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.860 "is_configured": true, 00:20:42.860 "data_offset": 2048, 00:20:42.860 "data_size": 63488 00:20:42.860 } 00:20:42.860 ] 00:20:42.860 }' 00:20:42.860 11:32:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.860 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.119 [2024-11-20 11:32:50.944135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.119 [2024-11-20 11:32:50.944344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.119 [2024-11-20 11:32:50.944473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.119 [2024-11-20 11:32:50.944571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.119 [2024-11-20 11:32:50.944592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.119 11:32:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.379 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:43.379 11:32:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.379 [2024-11-20 11:32:51.020130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:43.379 [2024-11-20 11:32:51.020219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.379 [2024-11-20 11:32:51.020253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:43.379 [2024-11-20 11:32:51.020270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.379 [2024-11-20 11:32:51.023571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.379 [2024-11-20 11:32:51.023739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:43.379 [2024-11-20 11:32:51.023960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:43.379 [2024-11-20 11:32:51.024141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:43.379 
[2024-11-20 11:32:51.024357] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:43.379 pt1 00:20:43.379 [2024-11-20 11:32:51.024509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.379 [2024-11-20 11:32:51.024544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:43.379 [2024-11-20 11:32:51.024646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.379 [2024-11-20 11:32:51.024849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.379 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.380 "name": "raid_bdev1", 00:20:43.380 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:43.380 "strip_size_kb": 64, 00:20:43.380 "state": "configuring", 00:20:43.380 "raid_level": "raid5f", 00:20:43.380 "superblock": true, 00:20:43.380 "num_base_bdevs": 4, 00:20:43.380 "num_base_bdevs_discovered": 2, 00:20:43.380 "num_base_bdevs_operational": 3, 00:20:43.380 "base_bdevs_list": [ 00:20:43.380 { 00:20:43.380 "name": null, 00:20:43.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.380 "is_configured": false, 00:20:43.380 "data_offset": 2048, 00:20:43.380 "data_size": 63488 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "name": "pt2", 00:20:43.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.380 "is_configured": true, 00:20:43.380 "data_offset": 2048, 00:20:43.380 "data_size": 63488 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "name": "pt3", 00:20:43.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.380 "is_configured": true, 00:20:43.380 "data_offset": 2048, 00:20:43.380 "data_size": 63488 00:20:43.380 }, 00:20:43.380 { 00:20:43.380 "name": null, 00:20:43.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.380 "is_configured": false, 00:20:43.380 "data_offset": 2048, 00:20:43.380 "data_size": 63488 00:20:43.380 } 00:20:43.380 ] 
00:20:43.380 }' 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.380 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.706 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:43.706 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.706 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.706 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.966 [2024-11-20 11:32:51.588714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:43.966 [2024-11-20 11:32:51.588833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.966 [2024-11-20 11:32:51.588870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:43.966 [2024-11-20 11:32:51.588886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.966 [2024-11-20 11:32:51.589492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.966 [2024-11-20 11:32:51.589516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:43.966 [2024-11-20 11:32:51.589619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:43.966 [2024-11-20 11:32:51.589691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:43.966 [2024-11-20 11:32:51.589881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:43.966 [2024-11-20 11:32:51.589898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:43.966 [2024-11-20 11:32:51.590250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:43.966 [2024-11-20 11:32:51.597192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:43.966 [2024-11-20 11:32:51.597363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:43.966 [2024-11-20 11:32:51.597866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.966 pt4 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.966 11:32:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.966 "name": "raid_bdev1", 00:20:43.966 "uuid": "51b81455-3f74-4323-8e79-f1ed98edbbd1", 00:20:43.966 "strip_size_kb": 64, 00:20:43.966 "state": "online", 00:20:43.966 "raid_level": "raid5f", 00:20:43.966 "superblock": true, 00:20:43.966 "num_base_bdevs": 4, 00:20:43.966 "num_base_bdevs_discovered": 3, 00:20:43.966 "num_base_bdevs_operational": 3, 00:20:43.966 "base_bdevs_list": [ 00:20:43.966 { 00:20:43.966 "name": null, 00:20:43.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.966 "is_configured": false, 00:20:43.966 "data_offset": 2048, 00:20:43.966 "data_size": 63488 00:20:43.966 }, 00:20:43.966 { 00:20:43.966 "name": "pt2", 00:20:43.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.966 "is_configured": true, 00:20:43.966 "data_offset": 2048, 00:20:43.966 "data_size": 63488 00:20:43.966 }, 00:20:43.966 { 00:20:43.966 "name": "pt3", 00:20:43.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.966 "is_configured": true, 00:20:43.966 "data_offset": 2048, 00:20:43.966 "data_size": 63488 
00:20:43.966 }, 00:20:43.966 { 00:20:43.966 "name": "pt4", 00:20:43.966 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.966 "is_configured": true, 00:20:43.966 "data_offset": 2048, 00:20:43.966 "data_size": 63488 00:20:43.966 } 00:20:43.966 ] 00:20:43.966 }' 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.966 11:32:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:44.535 [2024-11-20 11:32:52.162131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 51b81455-3f74-4323-8e79-f1ed98edbbd1 '!=' 51b81455-3f74-4323-8e79-f1ed98edbbd1 ']' 00:20:44.535 11:32:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84428 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84428 ']' 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84428 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84428 00:20:44.535 killing process with pid 84428 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84428' 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84428 00:20:44.535 [2024-11-20 11:32:52.240038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.535 11:32:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84428 00:20:44.535 [2024-11-20 11:32:52.240170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.535 [2024-11-20 11:32:52.240269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.535 [2024-11-20 11:32:52.240290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:44.794 [2024-11-20 11:32:52.591611] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.172 ************************************ 00:20:46.172 END TEST raid5f_superblock_test 00:20:46.172 
************************************ 00:20:46.172 11:32:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:46.172 00:20:46.172 real 0m9.439s 00:20:46.172 user 0m15.505s 00:20:46.172 sys 0m1.368s 00:20:46.172 11:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.172 11:32:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.172 11:32:53 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:46.172 11:32:53 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:46.172 11:32:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:46.172 11:32:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.172 11:32:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:46.172 ************************************ 00:20:46.172 START TEST raid5f_rebuild_test 00:20:46.172 ************************************ 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:46.172 11:32:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84919 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84919 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84919 ']' 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.172 11:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.172 [2024-11-20 11:32:53.775038] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:20:46.173 [2024-11-20 11:32:53.775377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84919 ] 00:20:46.173 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:46.173 Zero copy mechanism will not be used. 00:20:46.173 [2024-11-20 11:32:53.947229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.432 [2024-11-20 11:32:54.079485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.690 [2024-11-20 11:32:54.285945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.690 [2024-11-20 11:32:54.286254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.949 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.949 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:46.949 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:46.949 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:46.950 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.950 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 BaseBdev1_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:20:47.215 [2024-11-20 11:32:54.834041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:47.215 [2024-11-20 11:32:54.834160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.215 [2024-11-20 11:32:54.834193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:47.215 [2024-11-20 11:32:54.834212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.215 [2024-11-20 11:32:54.837137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.215 [2024-11-20 11:32:54.837200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:47.215 BaseBdev1 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 BaseBdev2_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 [2024-11-20 11:32:54.885737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:47.215 [2024-11-20 11:32:54.885826] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.215 [2024-11-20 11:32:54.885862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:47.215 [2024-11-20 11:32:54.885881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.215 [2024-11-20 11:32:54.888796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.215 [2024-11-20 11:32:54.888844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:47.215 BaseBdev2 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 BaseBdev3_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 [2024-11-20 11:32:54.948020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:47.215 [2024-11-20 11:32:54.948286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.215 [2024-11-20 11:32:54.948329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:47.215 
[2024-11-20 11:32:54.948350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.215 [2024-11-20 11:32:54.951201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.215 [2024-11-20 11:32:54.951424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:47.215 BaseBdev3 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 BaseBdev4_malloc 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 [2024-11-20 11:32:55.001494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:47.215 [2024-11-20 11:32:55.001566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.215 [2024-11-20 11:32:55.001594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:47.215 [2024-11-20 11:32:55.001612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.215 [2024-11-20 11:32:55.004487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:47.215 [2024-11-20 11:32:55.004555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:47.215 BaseBdev4 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.215 spare_malloc 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.215 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 spare_delay 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 [2024-11-20 11:32:55.062281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:47.511 [2024-11-20 11:32:55.062356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.511 [2024-11-20 11:32:55.062386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:47.511 [2024-11-20 11:32:55.062403] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.511 [2024-11-20 11:32:55.065156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.511 [2024-11-20 11:32:55.065340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:47.511 spare 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 [2024-11-20 11:32:55.070341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.511 [2024-11-20 11:32:55.072927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.511 [2024-11-20 11:32:55.073134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.511 [2024-11-20 11:32:55.073264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:47.511 [2024-11-20 11:32:55.073473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:47.511 [2024-11-20 11:32:55.073603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:47.511 [2024-11-20 11:32:55.073977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:47.511 [2024-11-20 11:32:55.080865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:47.511 [2024-11-20 11:32:55.081012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:47.511 [2024-11-20 
11:32:55.081404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.511 "name": "raid_bdev1", 00:20:47.511 "uuid": 
"c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:47.511 "strip_size_kb": 64, 00:20:47.511 "state": "online", 00:20:47.511 "raid_level": "raid5f", 00:20:47.511 "superblock": false, 00:20:47.511 "num_base_bdevs": 4, 00:20:47.511 "num_base_bdevs_discovered": 4, 00:20:47.511 "num_base_bdevs_operational": 4, 00:20:47.511 "base_bdevs_list": [ 00:20:47.511 { 00:20:47.511 "name": "BaseBdev1", 00:20:47.511 "uuid": "1de9cb26-60ac-5e1e-8c62-76edd3cebb9d", 00:20:47.511 "is_configured": true, 00:20:47.511 "data_offset": 0, 00:20:47.511 "data_size": 65536 00:20:47.511 }, 00:20:47.511 { 00:20:47.511 "name": "BaseBdev2", 00:20:47.511 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:47.511 "is_configured": true, 00:20:47.511 "data_offset": 0, 00:20:47.511 "data_size": 65536 00:20:47.511 }, 00:20:47.511 { 00:20:47.511 "name": "BaseBdev3", 00:20:47.511 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:47.511 "is_configured": true, 00:20:47.511 "data_offset": 0, 00:20:47.511 "data_size": 65536 00:20:47.511 }, 00:20:47.511 { 00:20:47.511 "name": "BaseBdev4", 00:20:47.511 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:47.511 "is_configured": true, 00:20:47.511 "data_offset": 0, 00:20:47.511 "data_size": 65536 00:20:47.511 } 00:20:47.511 ] 00:20:47.511 }' 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.511 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:47.785 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:47.785 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.785 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 [2024-11-20 11:32:55.585532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:47.785 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.044 11:32:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:48.302 [2024-11-20 11:32:56.017452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:48.302 /dev/nbd0 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.302 1+0 records in 00:20:48.302 1+0 records out 00:20:48.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272255 s, 15.0 MB/s 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.302 11:32:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:48.302 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:49.238 512+0 records in 00:20:49.238 512+0 records out 00:20:49.238 100663296 bytes (101 MB, 96 MiB) copied, 0.634626 s, 159 MB/s 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:49.238 11:32:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:20:49.238 [2024-11-20 11:32:57.021037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.238 [2024-11-20 11:32:57.060906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.238 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.497 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.497 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.497 "name": "raid_bdev1", 00:20:49.497 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:49.497 "strip_size_kb": 64, 00:20:49.497 "state": "online", 00:20:49.497 "raid_level": "raid5f", 00:20:49.497 "superblock": false, 00:20:49.497 "num_base_bdevs": 4, 00:20:49.497 "num_base_bdevs_discovered": 3, 00:20:49.497 "num_base_bdevs_operational": 3, 00:20:49.497 "base_bdevs_list": [ 00:20:49.497 { 00:20:49.497 "name": null, 00:20:49.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.497 "is_configured": false, 00:20:49.497 "data_offset": 0, 00:20:49.497 "data_size": 65536 00:20:49.497 }, 00:20:49.497 { 00:20:49.498 "name": "BaseBdev2", 00:20:49.498 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:49.498 "is_configured": true, 00:20:49.498 
"data_offset": 0, 00:20:49.498 "data_size": 65536 00:20:49.498 }, 00:20:49.498 { 00:20:49.498 "name": "BaseBdev3", 00:20:49.498 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:49.498 "is_configured": true, 00:20:49.498 "data_offset": 0, 00:20:49.498 "data_size": 65536 00:20:49.498 }, 00:20:49.498 { 00:20:49.498 "name": "BaseBdev4", 00:20:49.498 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:49.498 "is_configured": true, 00:20:49.498 "data_offset": 0, 00:20:49.498 "data_size": 65536 00:20:49.498 } 00:20:49.498 ] 00:20:49.498 }' 00:20:49.498 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.498 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.757 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.757 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.757 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.757 [2024-11-20 11:32:57.581051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.757 [2024-11-20 11:32:57.595545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:49.757 11:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.757 11:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:50.015 [2024-11-20 11:32:57.604870] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.951 "name": "raid_bdev1", 00:20:50.951 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:50.951 "strip_size_kb": 64, 00:20:50.951 "state": "online", 00:20:50.951 "raid_level": "raid5f", 00:20:50.951 "superblock": false, 00:20:50.951 "num_base_bdevs": 4, 00:20:50.951 "num_base_bdevs_discovered": 4, 00:20:50.951 "num_base_bdevs_operational": 4, 00:20:50.951 "process": { 00:20:50.951 "type": "rebuild", 00:20:50.951 "target": "spare", 00:20:50.951 "progress": { 00:20:50.951 "blocks": 17280, 00:20:50.951 "percent": 8 00:20:50.951 } 00:20:50.951 }, 00:20:50.951 "base_bdevs_list": [ 00:20:50.951 { 00:20:50.951 "name": "spare", 00:20:50.951 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:50.951 "is_configured": true, 00:20:50.951 "data_offset": 0, 00:20:50.951 "data_size": 65536 00:20:50.951 }, 00:20:50.951 { 00:20:50.951 "name": "BaseBdev2", 00:20:50.951 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:50.951 "is_configured": true, 00:20:50.951 "data_offset": 0, 00:20:50.951 "data_size": 65536 00:20:50.951 }, 00:20:50.951 { 00:20:50.951 "name": "BaseBdev3", 00:20:50.951 "uuid": 
"38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:50.951 "is_configured": true, 00:20:50.951 "data_offset": 0, 00:20:50.951 "data_size": 65536 00:20:50.951 }, 00:20:50.951 { 00:20:50.951 "name": "BaseBdev4", 00:20:50.951 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:50.951 "is_configured": true, 00:20:50.951 "data_offset": 0, 00:20:50.951 "data_size": 65536 00:20:50.951 } 00:20:50.951 ] 00:20:50.951 }' 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.951 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.951 [2024-11-20 11:32:58.758469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.210 [2024-11-20 11:32:58.818481] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.210 [2024-11-20 11:32:58.818599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.210 [2024-11-20 11:32:58.818649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.210 [2024-11-20 11:32:58.818680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.210 "name": "raid_bdev1", 00:20:51.210 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:51.210 "strip_size_kb": 64, 00:20:51.210 "state": "online", 00:20:51.210 "raid_level": "raid5f", 00:20:51.210 "superblock": false, 00:20:51.210 "num_base_bdevs": 4, 00:20:51.210 "num_base_bdevs_discovered": 3, 00:20:51.210 
"num_base_bdevs_operational": 3, 00:20:51.210 "base_bdevs_list": [ 00:20:51.210 { 00:20:51.210 "name": null, 00:20:51.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.210 "is_configured": false, 00:20:51.210 "data_offset": 0, 00:20:51.210 "data_size": 65536 00:20:51.210 }, 00:20:51.210 { 00:20:51.210 "name": "BaseBdev2", 00:20:51.210 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:51.210 "is_configured": true, 00:20:51.210 "data_offset": 0, 00:20:51.210 "data_size": 65536 00:20:51.210 }, 00:20:51.210 { 00:20:51.210 "name": "BaseBdev3", 00:20:51.210 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:51.210 "is_configured": true, 00:20:51.210 "data_offset": 0, 00:20:51.210 "data_size": 65536 00:20:51.210 }, 00:20:51.210 { 00:20:51.210 "name": "BaseBdev4", 00:20:51.210 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:51.210 "is_configured": true, 00:20:51.210 "data_offset": 0, 00:20:51.210 "data_size": 65536 00:20:51.210 } 00:20:51.210 ] 00:20:51.210 }' 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.210 11:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.777 11:32:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.777 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.777 "name": "raid_bdev1", 00:20:51.777 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:51.777 "strip_size_kb": 64, 00:20:51.777 "state": "online", 00:20:51.777 "raid_level": "raid5f", 00:20:51.777 "superblock": false, 00:20:51.777 "num_base_bdevs": 4, 00:20:51.777 "num_base_bdevs_discovered": 3, 00:20:51.777 "num_base_bdevs_operational": 3, 00:20:51.777 "base_bdevs_list": [ 00:20:51.777 { 00:20:51.777 "name": null, 00:20:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.777 "is_configured": false, 00:20:51.777 "data_offset": 0, 00:20:51.777 "data_size": 65536 00:20:51.777 }, 00:20:51.777 { 00:20:51.777 "name": "BaseBdev2", 00:20:51.777 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:51.777 "is_configured": true, 00:20:51.777 "data_offset": 0, 00:20:51.777 "data_size": 65536 00:20:51.777 }, 00:20:51.777 { 00:20:51.777 "name": "BaseBdev3", 00:20:51.777 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:51.777 "is_configured": true, 00:20:51.777 "data_offset": 0, 00:20:51.777 "data_size": 65536 00:20:51.777 }, 00:20:51.777 { 00:20:51.777 "name": "BaseBdev4", 00:20:51.778 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:51.778 "is_configured": true, 00:20:51.778 "data_offset": 0, 00:20:51.778 "data_size": 65536 00:20:51.778 } 00:20:51.778 ] 00:20:51.778 }' 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.778 [2024-11-20 11:32:59.510399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.778 [2024-11-20 11:32:59.523870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.778 11:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:51.778 [2024-11-20 11:32:59.532656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.713 
11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.713 11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.971 "name": "raid_bdev1", 00:20:52.971 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:52.971 "strip_size_kb": 64, 00:20:52.971 "state": "online", 00:20:52.971 "raid_level": "raid5f", 00:20:52.971 "superblock": false, 00:20:52.971 "num_base_bdevs": 4, 00:20:52.971 "num_base_bdevs_discovered": 4, 00:20:52.971 "num_base_bdevs_operational": 4, 00:20:52.971 "process": { 00:20:52.971 "type": "rebuild", 00:20:52.971 "target": "spare", 00:20:52.971 "progress": { 00:20:52.971 "blocks": 17280, 00:20:52.971 "percent": 8 00:20:52.971 } 00:20:52.971 }, 00:20:52.971 "base_bdevs_list": [ 00:20:52.971 { 00:20:52.971 "name": "spare", 00:20:52.971 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:52.971 "is_configured": true, 00:20:52.971 "data_offset": 0, 00:20:52.971 "data_size": 65536 00:20:52.971 }, 00:20:52.971 { 00:20:52.971 "name": "BaseBdev2", 00:20:52.971 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:52.971 "is_configured": true, 00:20:52.971 "data_offset": 0, 00:20:52.971 "data_size": 65536 00:20:52.971 }, 00:20:52.971 { 00:20:52.971 "name": "BaseBdev3", 00:20:52.971 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:52.971 "is_configured": true, 00:20:52.971 "data_offset": 0, 00:20:52.971 "data_size": 65536 00:20:52.971 }, 00:20:52.971 { 00:20:52.971 "name": "BaseBdev4", 00:20:52.971 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:52.971 "is_configured": true, 00:20:52.971 "data_offset": 0, 00:20:52.971 "data_size": 65536 00:20:52.971 } 00:20:52.971 ] 00:20:52.971 }' 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=670 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.971 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:52.972 "name": "raid_bdev1", 00:20:52.972 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:52.972 "strip_size_kb": 64, 00:20:52.972 "state": "online", 00:20:52.972 "raid_level": "raid5f", 00:20:52.972 "superblock": false, 00:20:52.972 "num_base_bdevs": 4, 00:20:52.972 "num_base_bdevs_discovered": 4, 00:20:52.972 "num_base_bdevs_operational": 4, 00:20:52.972 "process": { 00:20:52.972 "type": "rebuild", 00:20:52.972 "target": "spare", 00:20:52.972 "progress": { 00:20:52.972 "blocks": 21120, 00:20:52.972 "percent": 10 00:20:52.972 } 00:20:52.972 }, 00:20:52.972 "base_bdevs_list": [ 00:20:52.972 { 00:20:52.972 "name": "spare", 00:20:52.972 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:52.972 "is_configured": true, 00:20:52.972 "data_offset": 0, 00:20:52.972 "data_size": 65536 00:20:52.972 }, 00:20:52.972 { 00:20:52.972 "name": "BaseBdev2", 00:20:52.972 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:52.972 "is_configured": true, 00:20:52.972 "data_offset": 0, 00:20:52.972 "data_size": 65536 00:20:52.972 }, 00:20:52.972 { 00:20:52.972 "name": "BaseBdev3", 00:20:52.972 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:52.972 "is_configured": true, 00:20:52.972 "data_offset": 0, 00:20:52.972 "data_size": 65536 00:20:52.972 }, 00:20:52.972 { 00:20:52.972 "name": "BaseBdev4", 00:20:52.972 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:52.972 "is_configured": true, 00:20:52.972 "data_offset": 0, 00:20:52.972 "data_size": 65536 00:20:52.972 } 00:20:52.972 ] 00:20:52.972 }' 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.972 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.230 11:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.230 11:33:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.166 "name": "raid_bdev1", 00:20:54.166 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:54.166 "strip_size_kb": 64, 00:20:54.166 "state": "online", 00:20:54.166 "raid_level": "raid5f", 00:20:54.166 "superblock": false, 00:20:54.166 "num_base_bdevs": 4, 00:20:54.166 "num_base_bdevs_discovered": 4, 00:20:54.166 "num_base_bdevs_operational": 4, 00:20:54.166 "process": { 00:20:54.166 "type": "rebuild", 00:20:54.166 "target": "spare", 00:20:54.166 "progress": { 00:20:54.166 "blocks": 42240, 00:20:54.166 "percent": 21 00:20:54.166 } 00:20:54.166 }, 00:20:54.166 "base_bdevs_list": [ 00:20:54.166 { 
00:20:54.166 "name": "spare", 00:20:54.166 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:54.166 "is_configured": true, 00:20:54.166 "data_offset": 0, 00:20:54.166 "data_size": 65536 00:20:54.166 }, 00:20:54.166 { 00:20:54.166 "name": "BaseBdev2", 00:20:54.166 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:54.166 "is_configured": true, 00:20:54.166 "data_offset": 0, 00:20:54.166 "data_size": 65536 00:20:54.166 }, 00:20:54.166 { 00:20:54.166 "name": "BaseBdev3", 00:20:54.166 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:54.166 "is_configured": true, 00:20:54.166 "data_offset": 0, 00:20:54.166 "data_size": 65536 00:20:54.166 }, 00:20:54.166 { 00:20:54.166 "name": "BaseBdev4", 00:20:54.166 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:54.166 "is_configured": true, 00:20:54.166 "data_offset": 0, 00:20:54.166 "data_size": 65536 00:20:54.166 } 00:20:54.166 ] 00:20:54.166 }' 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.166 11:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.133 11:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.391 11:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.391 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.391 "name": "raid_bdev1", 00:20:55.391 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:55.391 "strip_size_kb": 64, 00:20:55.391 "state": "online", 00:20:55.391 "raid_level": "raid5f", 00:20:55.391 "superblock": false, 00:20:55.391 "num_base_bdevs": 4, 00:20:55.391 "num_base_bdevs_discovered": 4, 00:20:55.391 "num_base_bdevs_operational": 4, 00:20:55.391 "process": { 00:20:55.391 "type": "rebuild", 00:20:55.391 "target": "spare", 00:20:55.391 "progress": { 00:20:55.391 "blocks": 65280, 00:20:55.391 "percent": 33 00:20:55.391 } 00:20:55.391 }, 00:20:55.391 "base_bdevs_list": [ 00:20:55.391 { 00:20:55.391 "name": "spare", 00:20:55.391 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:55.391 "is_configured": true, 00:20:55.391 "data_offset": 0, 00:20:55.391 "data_size": 65536 00:20:55.391 }, 00:20:55.391 { 00:20:55.391 "name": "BaseBdev2", 00:20:55.391 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:55.391 "is_configured": true, 00:20:55.391 "data_offset": 0, 00:20:55.391 "data_size": 65536 00:20:55.391 }, 00:20:55.391 { 00:20:55.391 "name": "BaseBdev3", 00:20:55.391 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:55.391 "is_configured": true, 00:20:55.391 "data_offset": 0, 00:20:55.392 
"data_size": 65536 00:20:55.392 }, 00:20:55.392 { 00:20:55.392 "name": "BaseBdev4", 00:20:55.392 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:55.392 "is_configured": true, 00:20:55.392 "data_offset": 0, 00:20:55.392 "data_size": 65536 00:20:55.392 } 00:20:55.392 ] 00:20:55.392 }' 00:20:55.392 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.392 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.392 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.392 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.392 11:33:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.328 11:33:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.587 "name": "raid_bdev1", 00:20:56.587 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:56.587 "strip_size_kb": 64, 00:20:56.587 "state": "online", 00:20:56.587 "raid_level": "raid5f", 00:20:56.587 "superblock": false, 00:20:56.587 "num_base_bdevs": 4, 00:20:56.587 "num_base_bdevs_discovered": 4, 00:20:56.587 "num_base_bdevs_operational": 4, 00:20:56.587 "process": { 00:20:56.587 "type": "rebuild", 00:20:56.587 "target": "spare", 00:20:56.587 "progress": { 00:20:56.587 "blocks": 86400, 00:20:56.587 "percent": 43 00:20:56.587 } 00:20:56.587 }, 00:20:56.587 "base_bdevs_list": [ 00:20:56.587 { 00:20:56.587 "name": "spare", 00:20:56.587 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:56.587 "is_configured": true, 00:20:56.587 "data_offset": 0, 00:20:56.587 "data_size": 65536 00:20:56.587 }, 00:20:56.587 { 00:20:56.587 "name": "BaseBdev2", 00:20:56.587 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:56.587 "is_configured": true, 00:20:56.587 "data_offset": 0, 00:20:56.587 "data_size": 65536 00:20:56.587 }, 00:20:56.587 { 00:20:56.587 "name": "BaseBdev3", 00:20:56.587 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:56.587 "is_configured": true, 00:20:56.587 "data_offset": 0, 00:20:56.587 "data_size": 65536 00:20:56.587 }, 00:20:56.587 { 00:20:56.587 "name": "BaseBdev4", 00:20:56.587 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:56.587 "is_configured": true, 00:20:56.587 "data_offset": 0, 00:20:56.587 "data_size": 65536 00:20:56.587 } 00:20:56.587 ] 00:20:56.587 }' 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.587 11:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.523 "name": "raid_bdev1", 00:20:57.523 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:57.523 "strip_size_kb": 64, 00:20:57.523 "state": "online", 00:20:57.523 "raid_level": "raid5f", 00:20:57.523 "superblock": false, 00:20:57.523 "num_base_bdevs": 4, 00:20:57.523 "num_base_bdevs_discovered": 4, 00:20:57.523 "num_base_bdevs_operational": 4, 00:20:57.523 "process": { 00:20:57.523 "type": "rebuild", 00:20:57.523 "target": "spare", 00:20:57.523 
"progress": { 00:20:57.523 "blocks": 109440, 00:20:57.523 "percent": 55 00:20:57.523 } 00:20:57.523 }, 00:20:57.523 "base_bdevs_list": [ 00:20:57.523 { 00:20:57.523 "name": "spare", 00:20:57.523 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:57.523 "is_configured": true, 00:20:57.523 "data_offset": 0, 00:20:57.523 "data_size": 65536 00:20:57.523 }, 00:20:57.523 { 00:20:57.523 "name": "BaseBdev2", 00:20:57.523 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:57.523 "is_configured": true, 00:20:57.523 "data_offset": 0, 00:20:57.523 "data_size": 65536 00:20:57.523 }, 00:20:57.523 { 00:20:57.523 "name": "BaseBdev3", 00:20:57.523 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:57.523 "is_configured": true, 00:20:57.523 "data_offset": 0, 00:20:57.523 "data_size": 65536 00:20:57.523 }, 00:20:57.523 { 00:20:57.523 "name": "BaseBdev4", 00:20:57.523 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:57.523 "is_configured": true, 00:20:57.523 "data_offset": 0, 00:20:57.523 "data_size": 65536 00:20:57.523 } 00:20:57.523 ] 00:20:57.523 }' 00:20:57.523 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.782 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.782 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.782 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.782 11:33:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.721 11:33:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.721 "name": "raid_bdev1", 00:20:58.721 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:58.721 "strip_size_kb": 64, 00:20:58.721 "state": "online", 00:20:58.721 "raid_level": "raid5f", 00:20:58.721 "superblock": false, 00:20:58.721 "num_base_bdevs": 4, 00:20:58.721 "num_base_bdevs_discovered": 4, 00:20:58.721 "num_base_bdevs_operational": 4, 00:20:58.721 "process": { 00:20:58.721 "type": "rebuild", 00:20:58.721 "target": "spare", 00:20:58.721 "progress": { 00:20:58.721 "blocks": 130560, 00:20:58.721 "percent": 66 00:20:58.721 } 00:20:58.721 }, 00:20:58.721 "base_bdevs_list": [ 00:20:58.721 { 00:20:58.721 "name": "spare", 00:20:58.721 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:58.721 "is_configured": true, 00:20:58.721 "data_offset": 0, 00:20:58.721 "data_size": 65536 00:20:58.721 }, 00:20:58.721 { 00:20:58.721 "name": "BaseBdev2", 00:20:58.721 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:58.721 "is_configured": true, 00:20:58.721 "data_offset": 0, 00:20:58.721 "data_size": 65536 00:20:58.721 }, 00:20:58.721 { 
00:20:58.721 "name": "BaseBdev3", 00:20:58.721 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:58.721 "is_configured": true, 00:20:58.721 "data_offset": 0, 00:20:58.721 "data_size": 65536 00:20:58.721 }, 00:20:58.721 { 00:20:58.721 "name": "BaseBdev4", 00:20:58.721 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:58.721 "is_configured": true, 00:20:58.721 "data_offset": 0, 00:20:58.721 "data_size": 65536 00:20:58.721 } 00:20:58.721 ] 00:20:58.721 }' 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.721 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.979 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.979 11:33:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.915 "name": "raid_bdev1", 00:20:59.915 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:20:59.915 "strip_size_kb": 64, 00:20:59.915 "state": "online", 00:20:59.915 "raid_level": "raid5f", 00:20:59.915 "superblock": false, 00:20:59.915 "num_base_bdevs": 4, 00:20:59.915 "num_base_bdevs_discovered": 4, 00:20:59.915 "num_base_bdevs_operational": 4, 00:20:59.915 "process": { 00:20:59.915 "type": "rebuild", 00:20:59.915 "target": "spare", 00:20:59.915 "progress": { 00:20:59.915 "blocks": 151680, 00:20:59.915 "percent": 77 00:20:59.915 } 00:20:59.915 }, 00:20:59.915 "base_bdevs_list": [ 00:20:59.915 { 00:20:59.915 "name": "spare", 00:20:59.915 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:20:59.915 "is_configured": true, 00:20:59.915 "data_offset": 0, 00:20:59.915 "data_size": 65536 00:20:59.915 }, 00:20:59.915 { 00:20:59.915 "name": "BaseBdev2", 00:20:59.915 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:20:59.915 "is_configured": true, 00:20:59.915 "data_offset": 0, 00:20:59.915 "data_size": 65536 00:20:59.915 }, 00:20:59.915 { 00:20:59.915 "name": "BaseBdev3", 00:20:59.915 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:20:59.915 "is_configured": true, 00:20:59.915 "data_offset": 0, 00:20:59.915 "data_size": 65536 00:20:59.915 }, 00:20:59.915 { 00:20:59.915 "name": "BaseBdev4", 00:20:59.915 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:20:59.915 "is_configured": true, 00:20:59.915 "data_offset": 0, 00:20:59.915 "data_size": 65536 00:20:59.915 } 00:20:59.915 ] 00:20:59.915 }' 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.915 11:33:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.915 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.174 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.174 11:33:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.109 "name": "raid_bdev1", 00:21:01.109 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:21:01.109 "strip_size_kb": 64, 00:21:01.109 "state": "online", 00:21:01.109 "raid_level": "raid5f", 00:21:01.109 "superblock": false, 00:21:01.109 "num_base_bdevs": 4, 00:21:01.109 
"num_base_bdevs_discovered": 4, 00:21:01.109 "num_base_bdevs_operational": 4, 00:21:01.109 "process": { 00:21:01.109 "type": "rebuild", 00:21:01.109 "target": "spare", 00:21:01.109 "progress": { 00:21:01.109 "blocks": 174720, 00:21:01.109 "percent": 88 00:21:01.109 } 00:21:01.109 }, 00:21:01.109 "base_bdevs_list": [ 00:21:01.109 { 00:21:01.109 "name": "spare", 00:21:01.109 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:21:01.109 "is_configured": true, 00:21:01.109 "data_offset": 0, 00:21:01.109 "data_size": 65536 00:21:01.109 }, 00:21:01.109 { 00:21:01.109 "name": "BaseBdev2", 00:21:01.109 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:21:01.109 "is_configured": true, 00:21:01.109 "data_offset": 0, 00:21:01.109 "data_size": 65536 00:21:01.109 }, 00:21:01.109 { 00:21:01.109 "name": "BaseBdev3", 00:21:01.109 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:21:01.109 "is_configured": true, 00:21:01.109 "data_offset": 0, 00:21:01.109 "data_size": 65536 00:21:01.109 }, 00:21:01.109 { 00:21:01.109 "name": "BaseBdev4", 00:21:01.109 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:21:01.109 "is_configured": true, 00:21:01.109 "data_offset": 0, 00:21:01.109 "data_size": 65536 00:21:01.109 } 00:21:01.109 ] 00:21:01.109 }' 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.109 11:33:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.485 [2024-11-20 11:33:09.939335] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:02.485 [2024-11-20 11:33:09.939429] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:02.485 [2024-11-20 11:33:09.939494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.485 "name": "raid_bdev1", 00:21:02.485 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:21:02.485 "strip_size_kb": 64, 00:21:02.485 "state": "online", 00:21:02.485 "raid_level": "raid5f", 00:21:02.485 "superblock": false, 00:21:02.485 "num_base_bdevs": 4, 00:21:02.485 "num_base_bdevs_discovered": 4, 00:21:02.485 "num_base_bdevs_operational": 4, 00:21:02.485 "process": { 00:21:02.485 "type": "rebuild", 00:21:02.485 "target": "spare", 00:21:02.485 "progress": { 00:21:02.485 "blocks": 195840, 00:21:02.485 
"percent": 99 00:21:02.485 } 00:21:02.485 }, 00:21:02.485 "base_bdevs_list": [ 00:21:02.485 { 00:21:02.485 "name": "spare", 00:21:02.485 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:21:02.485 "is_configured": true, 00:21:02.485 "data_offset": 0, 00:21:02.485 "data_size": 65536 00:21:02.485 }, 00:21:02.485 { 00:21:02.485 "name": "BaseBdev2", 00:21:02.485 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:21:02.485 "is_configured": true, 00:21:02.485 "data_offset": 0, 00:21:02.485 "data_size": 65536 00:21:02.485 }, 00:21:02.485 { 00:21:02.485 "name": "BaseBdev3", 00:21:02.485 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:21:02.485 "is_configured": true, 00:21:02.485 "data_offset": 0, 00:21:02.485 "data_size": 65536 00:21:02.485 }, 00:21:02.485 { 00:21:02.485 "name": "BaseBdev4", 00:21:02.485 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:21:02.485 "is_configured": true, 00:21:02.485 "data_offset": 0, 00:21:02.485 "data_size": 65536 00:21:02.485 } 00:21:02.485 ] 00:21:02.485 }' 00:21:02.485 11:33:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.485 11:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.485 11:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.485 11:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.485 11:33:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.421 "name": "raid_bdev1", 00:21:03.421 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:21:03.421 "strip_size_kb": 64, 00:21:03.421 "state": "online", 00:21:03.421 "raid_level": "raid5f", 00:21:03.421 "superblock": false, 00:21:03.421 "num_base_bdevs": 4, 00:21:03.421 "num_base_bdevs_discovered": 4, 00:21:03.421 "num_base_bdevs_operational": 4, 00:21:03.421 "base_bdevs_list": [ 00:21:03.421 { 00:21:03.421 "name": "spare", 00:21:03.421 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:21:03.421 "is_configured": true, 00:21:03.421 "data_offset": 0, 00:21:03.421 "data_size": 65536 00:21:03.421 }, 00:21:03.421 { 00:21:03.421 "name": "BaseBdev2", 00:21:03.421 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:21:03.421 "is_configured": true, 00:21:03.421 "data_offset": 0, 00:21:03.421 "data_size": 65536 00:21:03.421 }, 00:21:03.421 { 00:21:03.421 "name": "BaseBdev3", 00:21:03.421 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:21:03.421 "is_configured": true, 00:21:03.421 "data_offset": 0, 00:21:03.421 "data_size": 65536 00:21:03.421 }, 00:21:03.421 { 00:21:03.421 "name": "BaseBdev4", 00:21:03.421 
"uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:21:03.421 "is_configured": true, 00:21:03.421 "data_offset": 0, 00:21:03.421 "data_size": 65536 00:21:03.421 } 00:21:03.421 ] 00:21:03.421 }' 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.421 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.680 "name": "raid_bdev1", 00:21:03.680 "uuid": 
"c4963b76-6624-494b-ada9-29db549bd2fb", 00:21:03.680 "strip_size_kb": 64, 00:21:03.680 "state": "online", 00:21:03.680 "raid_level": "raid5f", 00:21:03.680 "superblock": false, 00:21:03.680 "num_base_bdevs": 4, 00:21:03.680 "num_base_bdevs_discovered": 4, 00:21:03.680 "num_base_bdevs_operational": 4, 00:21:03.680 "base_bdevs_list": [ 00:21:03.680 { 00:21:03.680 "name": "spare", 00:21:03.680 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:21:03.680 "is_configured": true, 00:21:03.680 "data_offset": 0, 00:21:03.680 "data_size": 65536 00:21:03.680 }, 00:21:03.680 { 00:21:03.680 "name": "BaseBdev2", 00:21:03.680 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:21:03.680 "is_configured": true, 00:21:03.680 "data_offset": 0, 00:21:03.680 "data_size": 65536 00:21:03.680 }, 00:21:03.680 { 00:21:03.680 "name": "BaseBdev3", 00:21:03.680 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:21:03.680 "is_configured": true, 00:21:03.680 "data_offset": 0, 00:21:03.680 "data_size": 65536 00:21:03.680 }, 00:21:03.680 { 00:21:03.680 "name": "BaseBdev4", 00:21:03.680 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:21:03.680 "is_configured": true, 00:21:03.680 "data_offset": 0, 00:21:03.680 "data_size": 65536 00:21:03.680 } 00:21:03.680 ] 00:21:03.680 }' 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.680 11:33:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.680 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.681 "name": "raid_bdev1", 00:21:03.681 "uuid": "c4963b76-6624-494b-ada9-29db549bd2fb", 00:21:03.681 "strip_size_kb": 64, 00:21:03.681 "state": "online", 00:21:03.681 "raid_level": "raid5f", 00:21:03.681 "superblock": false, 00:21:03.681 "num_base_bdevs": 4, 00:21:03.681 "num_base_bdevs_discovered": 4, 00:21:03.681 "num_base_bdevs_operational": 4, 00:21:03.681 "base_bdevs_list": [ 00:21:03.681 { 00:21:03.681 "name": "spare", 00:21:03.681 "uuid": "73a4ff08-6979-56e8-af06-6977f346df99", 00:21:03.681 "is_configured": 
true, 00:21:03.681 "data_offset": 0, 00:21:03.681 "data_size": 65536 00:21:03.681 }, 00:21:03.681 { 00:21:03.681 "name": "BaseBdev2", 00:21:03.681 "uuid": "60b5ab86-f4c7-5196-9014-8b2739df4eb0", 00:21:03.681 "is_configured": true, 00:21:03.681 "data_offset": 0, 00:21:03.681 "data_size": 65536 00:21:03.681 }, 00:21:03.681 { 00:21:03.681 "name": "BaseBdev3", 00:21:03.681 "uuid": "38761f9d-96e0-5525-9f45-f031685b71cb", 00:21:03.681 "is_configured": true, 00:21:03.681 "data_offset": 0, 00:21:03.681 "data_size": 65536 00:21:03.681 }, 00:21:03.681 { 00:21:03.681 "name": "BaseBdev4", 00:21:03.681 "uuid": "218fe6d7-7e72-5eed-85ec-f95646b9e7c7", 00:21:03.681 "is_configured": true, 00:21:03.681 "data_offset": 0, 00:21:03.681 "data_size": 65536 00:21:03.681 } 00:21:03.681 ] 00:21:03.681 }' 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.681 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.248 [2024-11-20 11:33:11.931258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.248 [2024-11-20 11:33:11.931446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.248 [2024-11-20 11:33:11.931571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.248 [2024-11-20 11:33:11.931724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.248 [2024-11-20 11:33:11.931744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:04.248 11:33:11 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.248 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.249 11:33:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:04.508 /dev/nbd0 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.508 1+0 records in 00:21:04.508 1+0 records out 00:21:04.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395993 s, 10.3 MB/s 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:04.508 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:04.767 /dev/nbd1 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.030 1+0 records in 00:21:05.030 1+0 records out 00:21:05.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383744 s, 10.7 MB/s 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.030 11:33:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.293 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:05.557 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:05.557 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:05.557 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:05.557 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.557 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84919 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84919 ']' 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84919 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:21:05.558 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84919 00:21:05.847 killing process with pid 84919 00:21:05.847 Received shutdown signal, test time was about 60.000000 seconds 00:21:05.847 00:21:05.847 Latency(us) 00:21:05.847 [2024-11-20T11:33:13.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.847 [2024-11-20T11:33:13.693Z] =================================================================================================================== 00:21:05.847 [2024-11-20T11:33:13.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.847 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:05.847 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:05.847 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84919' 00:21:05.847 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84919 00:21:05.847 11:33:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84919 00:21:05.847 [2024-11-20 11:33:13.399565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.108 [2024-11-20 11:33:13.917435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:07.484 11:33:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:07.484 00:21:07.484 real 0m21.290s 00:21:07.484 user 0m26.380s 00:21:07.484 sys 0m2.169s 00:21:07.484 11:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.484 ************************************ 00:21:07.484 END TEST raid5f_rebuild_test 00:21:07.484 ************************************ 00:21:07.484 11:33:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.484 11:33:15 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:07.484 11:33:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:07.484 11:33:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.484 11:33:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.484 ************************************ 00:21:07.484 START TEST raid5f_rebuild_test_sb 00:21:07.484 ************************************ 00:21:07.484 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:21:07.484 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:07.484 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:07.484 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:07.484 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85446 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85446 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85446 ']' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.485 11:33:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.485 [2024-11-20 11:33:15.130232] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:21:07.485 [2024-11-20 11:33:15.130406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85446 ] 00:21:07.485 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:07.485 Zero copy mechanism will not be used. 
00:21:07.485 [2024-11-20 11:33:15.316890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.743 [2024-11-20 11:33:15.446614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.002 [2024-11-20 11:33:15.655623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.002 [2024-11-20 11:33:15.655705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.570 BaseBdev1_malloc 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.570 [2024-11-20 11:33:16.164515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.570 [2024-11-20 11:33:16.164594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.570 [2024-11-20 11:33:16.164641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:08.570 
[2024-11-20 11:33:16.164662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.570 [2024-11-20 11:33:16.168040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.570 [2024-11-20 11:33:16.168109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.570 BaseBdev1 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.570 BaseBdev2_malloc 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.570 [2024-11-20 11:33:16.221183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:08.570 [2024-11-20 11:33:16.221257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.570 [2024-11-20 11:33:16.221286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:08.570 [2024-11-20 11:33:16.221306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.570 [2024-11-20 11:33:16.224038] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.570 [2024-11-20 11:33:16.224086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:08.570 BaseBdev2 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:08.570 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 BaseBdev3_malloc 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 [2024-11-20 11:33:16.286091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:08.571 [2024-11-20 11:33:16.286182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.571 [2024-11-20 11:33:16.286212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:08.571 [2024-11-20 11:33:16.286231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.571 [2024-11-20 11:33:16.289671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.571 [2024-11-20 11:33:16.289721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:21:08.571 BaseBdev3 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 BaseBdev4_malloc 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 [2024-11-20 11:33:16.335625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:08.571 [2024-11-20 11:33:16.335691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.571 [2024-11-20 11:33:16.335720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:08.571 [2024-11-20 11:33:16.335738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.571 [2024-11-20 11:33:16.338517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.571 [2024-11-20 11:33:16.338571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:08.571 BaseBdev4 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 spare_malloc 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 spare_delay 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 [2024-11-20 11:33:16.393407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:08.571 [2024-11-20 11:33:16.393520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.571 [2024-11-20 11:33:16.393565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:08.571 [2024-11-20 11:33:16.393595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.571 [2024-11-20 11:33:16.396432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.571 [2024-11-20 11:33:16.396500] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:21:08.571 spare 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.571 [2024-11-20 11:33:16.401469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.571 [2024-11-20 11:33:16.403908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:08.571 [2024-11-20 11:33:16.404001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:08.571 [2024-11-20 11:33:16.404086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:08.571 [2024-11-20 11:33:16.404336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:08.571 [2024-11-20 11:33:16.404372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:08.571 [2024-11-20 11:33:16.404702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:08.571 [2024-11-20 11:33:16.411871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:08.571 [2024-11-20 11:33:16.411904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:08.571 [2024-11-20 11:33:16.412152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.571 11:33:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.571 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.830 "name": "raid_bdev1", 00:21:08.830 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:08.830 "strip_size_kb": 64, 00:21:08.830 "state": "online", 00:21:08.830 "raid_level": "raid5f", 00:21:08.830 "superblock": true, 
00:21:08.830 "num_base_bdevs": 4, 00:21:08.830 "num_base_bdevs_discovered": 4, 00:21:08.830 "num_base_bdevs_operational": 4, 00:21:08.830 "base_bdevs_list": [ 00:21:08.830 { 00:21:08.830 "name": "BaseBdev1", 00:21:08.830 "uuid": "b4decafe-d5b8-5073-a33b-8a2e03aa1dc2", 00:21:08.830 "is_configured": true, 00:21:08.830 "data_offset": 2048, 00:21:08.830 "data_size": 63488 00:21:08.830 }, 00:21:08.830 { 00:21:08.830 "name": "BaseBdev2", 00:21:08.830 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:08.830 "is_configured": true, 00:21:08.830 "data_offset": 2048, 00:21:08.830 "data_size": 63488 00:21:08.830 }, 00:21:08.830 { 00:21:08.830 "name": "BaseBdev3", 00:21:08.830 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:08.830 "is_configured": true, 00:21:08.830 "data_offset": 2048, 00:21:08.830 "data_size": 63488 00:21:08.830 }, 00:21:08.830 { 00:21:08.830 "name": "BaseBdev4", 00:21:08.830 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:08.830 "is_configured": true, 00:21:08.830 "data_offset": 2048, 00:21:08.830 "data_size": 63488 00:21:08.830 } 00:21:08.830 ] 00:21:08.830 }' 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.830 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.090 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:09.090 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:09.090 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.090 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.090 [2024-11-20 11:33:16.919837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.349 11:33:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:09.349 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:09.350 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:09.350 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.350 11:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:09.626 [2024-11-20 11:33:17.279728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:09.626 /dev/nbd0 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:09.626 1+0 records in 00:21:09.626 1+0 records out 00:21:09.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407076 s, 10.1 MB/s 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:09.626 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:21:10.203 496+0 records in 00:21:10.203 496+0 records out 00:21:10.203 97517568 bytes (98 MB, 93 MiB) copied, 0.570032 s, 171 MB/s 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.203 11:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:10.461 [2024-11-20 11:33:18.183414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 [2024-11-20 11:33:18.195080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.461 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.461 "name": "raid_bdev1", 00:21:10.461 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:10.461 "strip_size_kb": 64, 00:21:10.461 "state": "online", 00:21:10.461 "raid_level": "raid5f", 00:21:10.461 "superblock": true, 00:21:10.461 "num_base_bdevs": 4, 00:21:10.461 "num_base_bdevs_discovered": 3, 00:21:10.461 "num_base_bdevs_operational": 3, 00:21:10.461 "base_bdevs_list": [ 00:21:10.461 { 00:21:10.461 "name": null, 00:21:10.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.462 "is_configured": false, 00:21:10.462 "data_offset": 0, 00:21:10.462 "data_size": 63488 00:21:10.462 }, 00:21:10.462 { 00:21:10.462 "name": "BaseBdev2", 00:21:10.462 "uuid": 
"7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:10.462 "is_configured": true, 00:21:10.462 "data_offset": 2048, 00:21:10.462 "data_size": 63488 00:21:10.462 }, 00:21:10.462 { 00:21:10.462 "name": "BaseBdev3", 00:21:10.462 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:10.462 "is_configured": true, 00:21:10.462 "data_offset": 2048, 00:21:10.462 "data_size": 63488 00:21:10.462 }, 00:21:10.462 { 00:21:10.462 "name": "BaseBdev4", 00:21:10.462 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:10.462 "is_configured": true, 00:21:10.462 "data_offset": 2048, 00:21:10.462 "data_size": 63488 00:21:10.462 } 00:21:10.462 ] 00:21:10.462 }' 00:21:10.462 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.462 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.029 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:11.029 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.029 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.029 [2024-11-20 11:33:18.767248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.029 [2024-11-20 11:33:18.782161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:21:11.029 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.029 11:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:11.029 [2024-11-20 11:33:18.791523] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.964 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.224 "name": "raid_bdev1", 00:21:12.224 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:12.224 "strip_size_kb": 64, 00:21:12.224 "state": "online", 00:21:12.224 "raid_level": "raid5f", 00:21:12.224 "superblock": true, 00:21:12.224 "num_base_bdevs": 4, 00:21:12.224 "num_base_bdevs_discovered": 4, 00:21:12.224 "num_base_bdevs_operational": 4, 00:21:12.224 "process": { 00:21:12.224 "type": "rebuild", 00:21:12.224 "target": "spare", 00:21:12.224 "progress": { 00:21:12.224 "blocks": 17280, 00:21:12.224 "percent": 9 00:21:12.224 } 00:21:12.224 }, 00:21:12.224 "base_bdevs_list": [ 00:21:12.224 { 00:21:12.224 "name": "spare", 00:21:12.224 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:12.224 "is_configured": true, 00:21:12.224 "data_offset": 2048, 00:21:12.224 "data_size": 63488 00:21:12.224 }, 00:21:12.224 { 00:21:12.224 "name": "BaseBdev2", 00:21:12.224 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:12.224 "is_configured": true, 00:21:12.224 
"data_offset": 2048, 00:21:12.224 "data_size": 63488 00:21:12.224 }, 00:21:12.224 { 00:21:12.224 "name": "BaseBdev3", 00:21:12.224 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:12.224 "is_configured": true, 00:21:12.224 "data_offset": 2048, 00:21:12.224 "data_size": 63488 00:21:12.224 }, 00:21:12.224 { 00:21:12.224 "name": "BaseBdev4", 00:21:12.224 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:12.224 "is_configured": true, 00:21:12.224 "data_offset": 2048, 00:21:12.224 "data_size": 63488 00:21:12.224 } 00:21:12.224 ] 00:21:12.224 }' 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.224 11:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.224 [2024-11-20 11:33:19.957640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.224 [2024-11-20 11:33:20.005011] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:12.224 [2024-11-20 11:33:20.005133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.224 [2024-11-20 11:33:20.005161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.224 [2024-11-20 11:33:20.005176] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:12.224 
11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.224 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.483 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.483 "name": "raid_bdev1", 00:21:12.483 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:12.483 
"strip_size_kb": 64, 00:21:12.483 "state": "online", 00:21:12.483 "raid_level": "raid5f", 00:21:12.483 "superblock": true, 00:21:12.483 "num_base_bdevs": 4, 00:21:12.483 "num_base_bdevs_discovered": 3, 00:21:12.483 "num_base_bdevs_operational": 3, 00:21:12.483 "base_bdevs_list": [ 00:21:12.483 { 00:21:12.483 "name": null, 00:21:12.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.483 "is_configured": false, 00:21:12.483 "data_offset": 0, 00:21:12.483 "data_size": 63488 00:21:12.483 }, 00:21:12.483 { 00:21:12.483 "name": "BaseBdev2", 00:21:12.483 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:12.483 "is_configured": true, 00:21:12.483 "data_offset": 2048, 00:21:12.483 "data_size": 63488 00:21:12.483 }, 00:21:12.483 { 00:21:12.483 "name": "BaseBdev3", 00:21:12.483 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:12.483 "is_configured": true, 00:21:12.483 "data_offset": 2048, 00:21:12.483 "data_size": 63488 00:21:12.483 }, 00:21:12.483 { 00:21:12.483 "name": "BaseBdev4", 00:21:12.483 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:12.483 "is_configured": true, 00:21:12.483 "data_offset": 2048, 00:21:12.483 "data_size": 63488 00:21:12.483 } 00:21:12.483 ] 00:21:12.483 }' 00:21:12.483 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.483 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.742 
11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.742 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.001 "name": "raid_bdev1", 00:21:13.001 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:13.001 "strip_size_kb": 64, 00:21:13.001 "state": "online", 00:21:13.001 "raid_level": "raid5f", 00:21:13.001 "superblock": true, 00:21:13.001 "num_base_bdevs": 4, 00:21:13.001 "num_base_bdevs_discovered": 3, 00:21:13.001 "num_base_bdevs_operational": 3, 00:21:13.001 "base_bdevs_list": [ 00:21:13.001 { 00:21:13.001 "name": null, 00:21:13.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.001 "is_configured": false, 00:21:13.001 "data_offset": 0, 00:21:13.001 "data_size": 63488 00:21:13.001 }, 00:21:13.001 { 00:21:13.001 "name": "BaseBdev2", 00:21:13.001 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:13.001 "is_configured": true, 00:21:13.001 "data_offset": 2048, 00:21:13.001 "data_size": 63488 00:21:13.001 }, 00:21:13.001 { 00:21:13.001 "name": "BaseBdev3", 00:21:13.001 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:13.001 "is_configured": true, 00:21:13.001 "data_offset": 2048, 00:21:13.001 "data_size": 63488 00:21:13.001 }, 00:21:13.001 { 00:21:13.001 "name": "BaseBdev4", 00:21:13.001 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:13.001 "is_configured": true, 00:21:13.001 "data_offset": 2048, 00:21:13.001 "data_size": 63488 00:21:13.001 } 00:21:13.001 ] 00:21:13.001 }' 00:21:13.001 11:33:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.001 [2024-11-20 11:33:20.717394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.001 [2024-11-20 11:33:20.731905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.001 11:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:13.001 [2024-11-20 11:33:20.741263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.944 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.945 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.204 "name": "raid_bdev1", 00:21:14.204 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:14.204 "strip_size_kb": 64, 00:21:14.204 "state": "online", 00:21:14.204 "raid_level": "raid5f", 00:21:14.204 "superblock": true, 00:21:14.204 "num_base_bdevs": 4, 00:21:14.204 "num_base_bdevs_discovered": 4, 00:21:14.204 "num_base_bdevs_operational": 4, 00:21:14.204 "process": { 00:21:14.204 "type": "rebuild", 00:21:14.204 "target": "spare", 00:21:14.204 "progress": { 00:21:14.204 "blocks": 17280, 00:21:14.204 "percent": 9 00:21:14.204 } 00:21:14.204 }, 00:21:14.204 "base_bdevs_list": [ 00:21:14.204 { 00:21:14.204 "name": "spare", 00:21:14.204 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:14.204 "is_configured": true, 00:21:14.204 "data_offset": 2048, 00:21:14.204 "data_size": 63488 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev2", 00:21:14.204 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:14.204 "is_configured": true, 00:21:14.204 "data_offset": 2048, 00:21:14.204 "data_size": 63488 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev3", 00:21:14.204 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:14.204 "is_configured": true, 00:21:14.204 "data_offset": 2048, 00:21:14.204 "data_size": 63488 00:21:14.204 }, 00:21:14.204 { 00:21:14.204 "name": "BaseBdev4", 00:21:14.204 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 
00:21:14.204 "is_configured": true, 00:21:14.204 "data_offset": 2048, 00:21:14.204 "data_size": 63488 00:21:14.204 } 00:21:14.204 ] 00:21:14.204 }' 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:14.204 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=691 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.204 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.205 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.205 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.205 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.205 "name": "raid_bdev1", 00:21:14.205 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:14.205 "strip_size_kb": 64, 00:21:14.205 "state": "online", 00:21:14.205 "raid_level": "raid5f", 00:21:14.205 "superblock": true, 00:21:14.205 "num_base_bdevs": 4, 00:21:14.205 "num_base_bdevs_discovered": 4, 00:21:14.205 "num_base_bdevs_operational": 4, 00:21:14.205 "process": { 00:21:14.205 "type": "rebuild", 00:21:14.205 "target": "spare", 00:21:14.205 "progress": { 00:21:14.205 "blocks": 21120, 00:21:14.205 "percent": 11 00:21:14.205 } 00:21:14.205 }, 00:21:14.205 "base_bdevs_list": [ 00:21:14.205 { 00:21:14.205 "name": "spare", 00:21:14.205 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:14.205 "is_configured": true, 00:21:14.205 "data_offset": 2048, 00:21:14.205 "data_size": 63488 00:21:14.205 }, 00:21:14.205 { 00:21:14.205 "name": "BaseBdev2", 00:21:14.205 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:14.205 "is_configured": true, 00:21:14.205 "data_offset": 2048, 00:21:14.205 "data_size": 63488 00:21:14.205 }, 00:21:14.205 { 00:21:14.205 "name": "BaseBdev3", 00:21:14.205 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:14.205 "is_configured": true, 00:21:14.205 "data_offset": 2048, 00:21:14.205 "data_size": 63488 00:21:14.205 }, 00:21:14.205 { 00:21:14.205 "name": "BaseBdev4", 00:21:14.205 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 
00:21:14.205 "is_configured": true, 00:21:14.205 "data_offset": 2048, 00:21:14.205 "data_size": 63488 00:21:14.205 } 00:21:14.205 ] 00:21:14.205 }' 00:21:14.205 11:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.205 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.205 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.464 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.464 11:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.400 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.400 11:33:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.400 "name": "raid_bdev1", 00:21:15.400 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:15.400 "strip_size_kb": 64, 00:21:15.400 "state": "online", 00:21:15.400 "raid_level": "raid5f", 00:21:15.400 "superblock": true, 00:21:15.400 "num_base_bdevs": 4, 00:21:15.400 "num_base_bdevs_discovered": 4, 00:21:15.400 "num_base_bdevs_operational": 4, 00:21:15.400 "process": { 00:21:15.400 "type": "rebuild", 00:21:15.400 "target": "spare", 00:21:15.400 "progress": { 00:21:15.400 "blocks": 44160, 00:21:15.400 "percent": 23 00:21:15.400 } 00:21:15.400 }, 00:21:15.400 "base_bdevs_list": [ 00:21:15.400 { 00:21:15.400 "name": "spare", 00:21:15.401 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:15.401 "is_configured": true, 00:21:15.401 "data_offset": 2048, 00:21:15.401 "data_size": 63488 00:21:15.401 }, 00:21:15.401 { 00:21:15.401 "name": "BaseBdev2", 00:21:15.401 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:15.401 "is_configured": true, 00:21:15.401 "data_offset": 2048, 00:21:15.401 "data_size": 63488 00:21:15.401 }, 00:21:15.401 { 00:21:15.401 "name": "BaseBdev3", 00:21:15.401 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:15.401 "is_configured": true, 00:21:15.401 "data_offset": 2048, 00:21:15.401 "data_size": 63488 00:21:15.401 }, 00:21:15.401 { 00:21:15.401 "name": "BaseBdev4", 00:21:15.401 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:15.401 "is_configured": true, 00:21:15.401 "data_offset": 2048, 00:21:15.401 "data_size": 63488 00:21:15.401 } 00:21:15.401 ] 00:21:15.401 }' 00:21:15.401 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.401 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.401 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.401 11:33:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.401 11:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.776 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.776 "name": "raid_bdev1", 00:21:16.776 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:16.776 "strip_size_kb": 64, 00:21:16.776 "state": "online", 00:21:16.776 "raid_level": "raid5f", 00:21:16.776 "superblock": true, 00:21:16.776 "num_base_bdevs": 4, 00:21:16.776 "num_base_bdevs_discovered": 4, 00:21:16.776 "num_base_bdevs_operational": 4, 00:21:16.776 "process": { 00:21:16.776 "type": "rebuild", 00:21:16.776 "target": "spare", 00:21:16.776 "progress": 
{ 00:21:16.776 "blocks": 65280, 00:21:16.776 "percent": 34 00:21:16.776 } 00:21:16.776 }, 00:21:16.776 "base_bdevs_list": [ 00:21:16.776 { 00:21:16.776 "name": "spare", 00:21:16.776 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:16.776 "is_configured": true, 00:21:16.776 "data_offset": 2048, 00:21:16.776 "data_size": 63488 00:21:16.776 }, 00:21:16.776 { 00:21:16.776 "name": "BaseBdev2", 00:21:16.776 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:16.776 "is_configured": true, 00:21:16.776 "data_offset": 2048, 00:21:16.776 "data_size": 63488 00:21:16.776 }, 00:21:16.776 { 00:21:16.776 "name": "BaseBdev3", 00:21:16.776 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:16.776 "is_configured": true, 00:21:16.777 "data_offset": 2048, 00:21:16.777 "data_size": 63488 00:21:16.777 }, 00:21:16.777 { 00:21:16.777 "name": "BaseBdev4", 00:21:16.777 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:16.777 "is_configured": true, 00:21:16.777 "data_offset": 2048, 00:21:16.777 "data_size": 63488 00:21:16.777 } 00:21:16.777 ] 00:21:16.777 }' 00:21:16.777 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.777 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.777 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.777 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.777 11:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.814 
11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.814 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.814 "name": "raid_bdev1", 00:21:17.814 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:17.814 "strip_size_kb": 64, 00:21:17.814 "state": "online", 00:21:17.814 "raid_level": "raid5f", 00:21:17.814 "superblock": true, 00:21:17.814 "num_base_bdevs": 4, 00:21:17.814 "num_base_bdevs_discovered": 4, 00:21:17.814 "num_base_bdevs_operational": 4, 00:21:17.814 "process": { 00:21:17.814 "type": "rebuild", 00:21:17.814 "target": "spare", 00:21:17.814 "progress": { 00:21:17.814 "blocks": 88320, 00:21:17.815 "percent": 46 00:21:17.815 } 00:21:17.815 }, 00:21:17.815 "base_bdevs_list": [ 00:21:17.815 { 00:21:17.815 "name": "spare", 00:21:17.815 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:17.815 "is_configured": true, 00:21:17.815 "data_offset": 2048, 00:21:17.815 "data_size": 63488 00:21:17.815 }, 00:21:17.815 { 00:21:17.815 "name": "BaseBdev2", 00:21:17.815 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:17.815 "is_configured": true, 00:21:17.815 "data_offset": 2048, 00:21:17.815 "data_size": 
63488 00:21:17.815 }, 00:21:17.815 { 00:21:17.815 "name": "BaseBdev3", 00:21:17.815 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:17.815 "is_configured": true, 00:21:17.815 "data_offset": 2048, 00:21:17.815 "data_size": 63488 00:21:17.815 }, 00:21:17.815 { 00:21:17.815 "name": "BaseBdev4", 00:21:17.815 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:17.815 "is_configured": true, 00:21:17.815 "data_offset": 2048, 00:21:17.815 "data_size": 63488 00:21:17.815 } 00:21:17.815 ] 00:21:17.815 }' 00:21:17.815 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.815 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.815 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.815 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.815 11:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.751 11:33:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.751 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.010 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.010 "name": "raid_bdev1", 00:21:19.010 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:19.010 "strip_size_kb": 64, 00:21:19.010 "state": "online", 00:21:19.010 "raid_level": "raid5f", 00:21:19.010 "superblock": true, 00:21:19.010 "num_base_bdevs": 4, 00:21:19.010 "num_base_bdevs_discovered": 4, 00:21:19.010 "num_base_bdevs_operational": 4, 00:21:19.010 "process": { 00:21:19.010 "type": "rebuild", 00:21:19.010 "target": "spare", 00:21:19.010 "progress": { 00:21:19.010 "blocks": 109440, 00:21:19.010 "percent": 57 00:21:19.010 } 00:21:19.010 }, 00:21:19.010 "base_bdevs_list": [ 00:21:19.010 { 00:21:19.010 "name": "spare", 00:21:19.010 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:19.010 "is_configured": true, 00:21:19.010 "data_offset": 2048, 00:21:19.010 "data_size": 63488 00:21:19.010 }, 00:21:19.010 { 00:21:19.010 "name": "BaseBdev2", 00:21:19.010 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:19.010 "is_configured": true, 00:21:19.010 "data_offset": 2048, 00:21:19.010 "data_size": 63488 00:21:19.010 }, 00:21:19.010 { 00:21:19.010 "name": "BaseBdev3", 00:21:19.010 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:19.010 "is_configured": true, 00:21:19.010 "data_offset": 2048, 00:21:19.010 "data_size": 63488 00:21:19.010 }, 00:21:19.010 { 00:21:19.010 "name": "BaseBdev4", 00:21:19.010 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:19.010 "is_configured": true, 00:21:19.010 "data_offset": 2048, 00:21:19.010 "data_size": 63488 00:21:19.010 } 00:21:19.010 ] 00:21:19.010 }' 00:21:19.010 11:33:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.010 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.010 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.010 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.010 11:33:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.947 "name": "raid_bdev1", 00:21:19.947 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:19.947 
"strip_size_kb": 64, 00:21:19.947 "state": "online", 00:21:19.947 "raid_level": "raid5f", 00:21:19.947 "superblock": true, 00:21:19.947 "num_base_bdevs": 4, 00:21:19.947 "num_base_bdevs_discovered": 4, 00:21:19.947 "num_base_bdevs_operational": 4, 00:21:19.947 "process": { 00:21:19.947 "type": "rebuild", 00:21:19.947 "target": "spare", 00:21:19.947 "progress": { 00:21:19.947 "blocks": 132480, 00:21:19.947 "percent": 69 00:21:19.947 } 00:21:19.947 }, 00:21:19.947 "base_bdevs_list": [ 00:21:19.947 { 00:21:19.947 "name": "spare", 00:21:19.947 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:19.947 "is_configured": true, 00:21:19.947 "data_offset": 2048, 00:21:19.947 "data_size": 63488 00:21:19.947 }, 00:21:19.947 { 00:21:19.947 "name": "BaseBdev2", 00:21:19.947 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:19.947 "is_configured": true, 00:21:19.947 "data_offset": 2048, 00:21:19.947 "data_size": 63488 00:21:19.947 }, 00:21:19.947 { 00:21:19.947 "name": "BaseBdev3", 00:21:19.947 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:19.947 "is_configured": true, 00:21:19.947 "data_offset": 2048, 00:21:19.947 "data_size": 63488 00:21:19.947 }, 00:21:19.947 { 00:21:19.947 "name": "BaseBdev4", 00:21:19.947 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:19.947 "is_configured": true, 00:21:19.947 "data_offset": 2048, 00:21:19.947 "data_size": 63488 00:21:19.947 } 00:21:19.947 ] 00:21:19.947 }' 00:21:19.947 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.206 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.206 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:20.206 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.206 11:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:21.143 
11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.143 "name": "raid_bdev1", 00:21:21.143 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:21.143 "strip_size_kb": 64, 00:21:21.143 "state": "online", 00:21:21.143 "raid_level": "raid5f", 00:21:21.143 "superblock": true, 00:21:21.143 "num_base_bdevs": 4, 00:21:21.143 "num_base_bdevs_discovered": 4, 00:21:21.143 "num_base_bdevs_operational": 4, 00:21:21.143 "process": { 00:21:21.143 "type": "rebuild", 00:21:21.143 "target": "spare", 00:21:21.143 "progress": { 00:21:21.143 "blocks": 153600, 00:21:21.143 "percent": 80 00:21:21.143 } 00:21:21.143 }, 00:21:21.143 "base_bdevs_list": [ 00:21:21.143 { 00:21:21.143 "name": "spare", 00:21:21.143 "uuid": 
"cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:21.143 "is_configured": true, 00:21:21.143 "data_offset": 2048, 00:21:21.143 "data_size": 63488 00:21:21.143 }, 00:21:21.143 { 00:21:21.143 "name": "BaseBdev2", 00:21:21.143 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:21.143 "is_configured": true, 00:21:21.143 "data_offset": 2048, 00:21:21.143 "data_size": 63488 00:21:21.143 }, 00:21:21.143 { 00:21:21.143 "name": "BaseBdev3", 00:21:21.143 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:21.143 "is_configured": true, 00:21:21.143 "data_offset": 2048, 00:21:21.143 "data_size": 63488 00:21:21.143 }, 00:21:21.143 { 00:21:21.143 "name": "BaseBdev4", 00:21:21.143 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:21.143 "is_configured": true, 00:21:21.143 "data_offset": 2048, 00:21:21.143 "data_size": 63488 00:21:21.143 } 00:21:21.143 ] 00:21:21.143 }' 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.143 11:33:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.401 11:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.401 11:33:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.339 "name": "raid_bdev1", 00:21:22.339 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:22.339 "strip_size_kb": 64, 00:21:22.339 "state": "online", 00:21:22.339 "raid_level": "raid5f", 00:21:22.339 "superblock": true, 00:21:22.339 "num_base_bdevs": 4, 00:21:22.339 "num_base_bdevs_discovered": 4, 00:21:22.339 "num_base_bdevs_operational": 4, 00:21:22.339 "process": { 00:21:22.339 "type": "rebuild", 00:21:22.339 "target": "spare", 00:21:22.339 "progress": { 00:21:22.339 "blocks": 174720, 00:21:22.339 "percent": 91 00:21:22.339 } 00:21:22.339 }, 00:21:22.339 "base_bdevs_list": [ 00:21:22.339 { 00:21:22.339 "name": "spare", 00:21:22.339 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:22.339 "is_configured": true, 00:21:22.339 "data_offset": 2048, 00:21:22.339 "data_size": 63488 00:21:22.339 }, 00:21:22.339 { 00:21:22.339 "name": "BaseBdev2", 00:21:22.339 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:22.339 "is_configured": true, 00:21:22.339 "data_offset": 2048, 00:21:22.339 "data_size": 63488 00:21:22.339 }, 00:21:22.339 { 00:21:22.339 "name": "BaseBdev3", 00:21:22.339 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:22.339 "is_configured": true, 00:21:22.339 
"data_offset": 2048, 00:21:22.339 "data_size": 63488 00:21:22.339 }, 00:21:22.339 { 00:21:22.339 "name": "BaseBdev4", 00:21:22.339 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:22.339 "is_configured": true, 00:21:22.339 "data_offset": 2048, 00:21:22.339 "data_size": 63488 00:21:22.339 } 00:21:22.339 ] 00:21:22.339 }' 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.339 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.598 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.598 11:33:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:23.304 [2024-11-20 11:33:30.849168] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:23.304 [2024-11-20 11:33:30.849275] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:23.304 [2024-11-20 11:33:30.849552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.588 11:33:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.588 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.588 "name": "raid_bdev1", 00:21:23.588 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:23.588 "strip_size_kb": 64, 00:21:23.588 "state": "online", 00:21:23.588 "raid_level": "raid5f", 00:21:23.588 "superblock": true, 00:21:23.588 "num_base_bdevs": 4, 00:21:23.588 "num_base_bdevs_discovered": 4, 00:21:23.588 "num_base_bdevs_operational": 4, 00:21:23.588 "base_bdevs_list": [ 00:21:23.588 { 00:21:23.588 "name": "spare", 00:21:23.588 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:23.588 "is_configured": true, 00:21:23.588 "data_offset": 2048, 00:21:23.588 "data_size": 63488 00:21:23.588 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev2", 00:21:23.589 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev3", 00:21:23.589 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev4", 00:21:23.589 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 } 00:21:23.589 ] 00:21:23.589 }' 00:21:23.589 11:33:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.589 "name": "raid_bdev1", 00:21:23.589 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:23.589 "strip_size_kb": 64, 00:21:23.589 "state": "online", 00:21:23.589 "raid_level": "raid5f", 00:21:23.589 "superblock": true, 
00:21:23.589 "num_base_bdevs": 4, 00:21:23.589 "num_base_bdevs_discovered": 4, 00:21:23.589 "num_base_bdevs_operational": 4, 00:21:23.589 "base_bdevs_list": [ 00:21:23.589 { 00:21:23.589 "name": "spare", 00:21:23.589 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev2", 00:21:23.589 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev3", 00:21:23.589 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 }, 00:21:23.589 { 00:21:23.589 "name": "BaseBdev4", 00:21:23.589 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:23.589 "is_configured": true, 00:21:23.589 "data_offset": 2048, 00:21:23.589 "data_size": 63488 00:21:23.589 } 00:21:23.589 ] 00:21:23.589 }' 00:21:23.589 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.849 "name": "raid_bdev1", 00:21:23.849 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:23.849 "strip_size_kb": 64, 00:21:23.849 "state": "online", 00:21:23.849 "raid_level": "raid5f", 00:21:23.849 "superblock": true, 00:21:23.849 "num_base_bdevs": 4, 00:21:23.849 "num_base_bdevs_discovered": 4, 00:21:23.849 "num_base_bdevs_operational": 4, 00:21:23.849 "base_bdevs_list": [ 00:21:23.849 { 00:21:23.849 "name": "spare", 00:21:23.849 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:23.849 "is_configured": true, 00:21:23.849 "data_offset": 2048, 00:21:23.849 "data_size": 63488 00:21:23.849 }, 00:21:23.849 { 00:21:23.849 "name": 
"BaseBdev2", 00:21:23.849 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:23.849 "is_configured": true, 00:21:23.849 "data_offset": 2048, 00:21:23.849 "data_size": 63488 00:21:23.849 }, 00:21:23.849 { 00:21:23.849 "name": "BaseBdev3", 00:21:23.849 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:23.849 "is_configured": true, 00:21:23.849 "data_offset": 2048, 00:21:23.849 "data_size": 63488 00:21:23.849 }, 00:21:23.849 { 00:21:23.849 "name": "BaseBdev4", 00:21:23.849 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:23.849 "is_configured": true, 00:21:23.849 "data_offset": 2048, 00:21:23.849 "data_size": 63488 00:21:23.849 } 00:21:23.849 ] 00:21:23.849 }' 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.849 11:33:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.418 [2024-11-20 11:33:32.053322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.418 [2024-11-20 11:33:32.053361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.418 [2024-11-20 11:33:32.053459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.418 [2024-11-20 11:33:32.053592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.418 [2024-11-20 11:33:32.053649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.418 
11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.418 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:21:24.677 /dev/nbd0 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.677 1+0 records in 00:21:24.677 1+0 records out 00:21:24.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299972 s, 13.7 MB/s 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.677 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@893 -- # return 0 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.678 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:24.936 /dev/nbd1 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.195 1+0 records in 00:21:25.195 1+0 records out 00:21:25.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420873 s, 9.7 MB/s 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.195 
11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.195 11:33:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.763 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.022 11:33:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.022 [2024-11-20 11:33:33.689223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.022 [2024-11-20 11:33:33.689302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.022 [2024-11-20 11:33:33.689343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:26.022 [2024-11-20 11:33:33.689358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.022 [2024-11-20 11:33:33.692357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.022 [2024-11-20 11:33:33.692405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.022 [2024-11-20 11:33:33.692531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:26.022 [2024-11-20 11:33:33.692599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.022 [2024-11-20 11:33:33.692811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.022 [2024-11-20 11:33:33.692961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.022 [2024-11-20 11:33:33.693093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:26.022 spare 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.022 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.023 [2024-11-20 11:33:33.793239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:26.023 [2024-11-20 11:33:33.793314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:26.023 [2024-11-20 11:33:33.793765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:21:26.023 [2024-11-20 11:33:33.800135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:26.023 [2024-11-20 11:33:33.800165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:26.023 [2024-11-20 11:33:33.800437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.023 "name": "raid_bdev1", 00:21:26.023 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:26.023 "strip_size_kb": 64, 00:21:26.023 "state": "online", 00:21:26.023 "raid_level": "raid5f", 00:21:26.023 "superblock": true, 00:21:26.023 "num_base_bdevs": 4, 00:21:26.023 "num_base_bdevs_discovered": 4, 00:21:26.023 "num_base_bdevs_operational": 4, 00:21:26.023 "base_bdevs_list": [ 00:21:26.023 { 00:21:26.023 "name": "spare", 00:21:26.023 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:26.023 "is_configured": true, 00:21:26.023 "data_offset": 2048, 00:21:26.023 "data_size": 63488 00:21:26.023 }, 00:21:26.023 { 00:21:26.023 "name": "BaseBdev2", 00:21:26.023 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:26.023 "is_configured": true, 00:21:26.023 "data_offset": 2048, 00:21:26.023 "data_size": 63488 00:21:26.023 }, 00:21:26.023 { 00:21:26.023 "name": "BaseBdev3", 00:21:26.023 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:26.023 "is_configured": true, 00:21:26.023 "data_offset": 2048, 00:21:26.023 "data_size": 63488 00:21:26.023 }, 
00:21:26.023 { 00:21:26.023 "name": "BaseBdev4", 00:21:26.023 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:26.023 "is_configured": true, 00:21:26.023 "data_offset": 2048, 00:21:26.023 "data_size": 63488 00:21:26.023 } 00:21:26.023 ] 00:21:26.023 }' 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.023 11:33:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.590 "name": "raid_bdev1", 00:21:26.590 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:26.590 "strip_size_kb": 64, 00:21:26.590 "state": "online", 00:21:26.590 "raid_level": "raid5f", 00:21:26.590 "superblock": true, 00:21:26.590 "num_base_bdevs": 4, 00:21:26.590 "num_base_bdevs_discovered": 4, 
00:21:26.590 "num_base_bdevs_operational": 4, 00:21:26.590 "base_bdevs_list": [ 00:21:26.590 { 00:21:26.590 "name": "spare", 00:21:26.590 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:26.590 "is_configured": true, 00:21:26.590 "data_offset": 2048, 00:21:26.590 "data_size": 63488 00:21:26.590 }, 00:21:26.590 { 00:21:26.590 "name": "BaseBdev2", 00:21:26.590 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:26.590 "is_configured": true, 00:21:26.590 "data_offset": 2048, 00:21:26.590 "data_size": 63488 00:21:26.590 }, 00:21:26.590 { 00:21:26.590 "name": "BaseBdev3", 00:21:26.590 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:26.590 "is_configured": true, 00:21:26.590 "data_offset": 2048, 00:21:26.590 "data_size": 63488 00:21:26.590 }, 00:21:26.590 { 00:21:26.590 "name": "BaseBdev4", 00:21:26.590 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:26.590 "is_configured": true, 00:21:26.590 "data_offset": 2048, 00:21:26.590 "data_size": 63488 00:21:26.590 } 00:21:26.590 ] 00:21:26.590 }' 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:26.590 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.848 [2024-11-20 11:33:34.512079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.848 "name": "raid_bdev1", 00:21:26.848 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:26.848 "strip_size_kb": 64, 00:21:26.848 "state": "online", 00:21:26.848 "raid_level": "raid5f", 00:21:26.848 "superblock": true, 00:21:26.848 "num_base_bdevs": 4, 00:21:26.848 "num_base_bdevs_discovered": 3, 00:21:26.848 "num_base_bdevs_operational": 3, 00:21:26.848 "base_bdevs_list": [ 00:21:26.848 { 00:21:26.848 "name": null, 00:21:26.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.848 "is_configured": false, 00:21:26.848 "data_offset": 0, 00:21:26.848 "data_size": 63488 00:21:26.848 }, 00:21:26.848 { 00:21:26.848 "name": "BaseBdev2", 00:21:26.848 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:26.848 "is_configured": true, 00:21:26.848 "data_offset": 2048, 00:21:26.848 "data_size": 63488 00:21:26.848 }, 00:21:26.848 { 00:21:26.848 "name": "BaseBdev3", 00:21:26.848 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:26.848 "is_configured": true, 00:21:26.848 "data_offset": 2048, 00:21:26.848 "data_size": 63488 00:21:26.848 }, 00:21:26.848 { 00:21:26.848 "name": "BaseBdev4", 00:21:26.848 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:26.848 "is_configured": true, 00:21:26.848 "data_offset": 2048, 00:21:26.848 "data_size": 63488 00:21:26.848 } 00:21:26.848 ] 00:21:26.848 }' 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.848 11:33:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.416 11:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:27.416 11:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.416 11:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.416 [2024-11-20 11:33:35.028411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.416 [2024-11-20 11:33:35.028657] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:27.416 [2024-11-20 11:33:35.028685] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:27.416 [2024-11-20 11:33:35.028742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.416 [2024-11-20 11:33:35.042292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:21:27.416 11:33:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.416 11:33:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:27.416 [2024-11-20 11:33:35.051104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.349 
11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.349 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.349 "name": "raid_bdev1", 00:21:28.349 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:28.349 "strip_size_kb": 64, 00:21:28.349 "state": "online", 00:21:28.349 "raid_level": "raid5f", 00:21:28.349 "superblock": true, 00:21:28.350 "num_base_bdevs": 4, 00:21:28.350 "num_base_bdevs_discovered": 4, 00:21:28.350 "num_base_bdevs_operational": 4, 00:21:28.350 "process": { 00:21:28.350 "type": "rebuild", 00:21:28.350 "target": "spare", 00:21:28.350 "progress": { 00:21:28.350 "blocks": 17280, 00:21:28.350 "percent": 9 00:21:28.350 } 00:21:28.350 }, 00:21:28.350 "base_bdevs_list": [ 00:21:28.350 { 00:21:28.350 "name": "spare", 00:21:28.350 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:28.350 "is_configured": true, 00:21:28.350 "data_offset": 2048, 00:21:28.350 "data_size": 63488 00:21:28.350 }, 00:21:28.350 { 00:21:28.350 "name": "BaseBdev2", 00:21:28.350 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:28.350 "is_configured": true, 00:21:28.350 "data_offset": 2048, 00:21:28.350 "data_size": 63488 00:21:28.350 }, 00:21:28.350 { 00:21:28.350 "name": "BaseBdev3", 00:21:28.350 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:28.350 "is_configured": true, 00:21:28.350 "data_offset": 2048, 00:21:28.350 "data_size": 63488 00:21:28.350 }, 00:21:28.350 { 00:21:28.350 "name": "BaseBdev4", 00:21:28.350 "uuid": 
"8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:28.350 "is_configured": true, 00:21:28.350 "data_offset": 2048, 00:21:28.350 "data_size": 63488 00:21:28.350 } 00:21:28.350 ] 00:21:28.350 }' 00:21:28.350 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.350 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.350 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.644 [2024-11-20 11:33:36.212282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.644 [2024-11-20 11:33:36.264270] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:28.644 [2024-11-20 11:33:36.264428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.644 [2024-11-20 11:33:36.264457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.644 [2024-11-20 11:33:36.264476] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.644 11:33:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.644 "name": "raid_bdev1", 00:21:28.644 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:28.644 "strip_size_kb": 64, 00:21:28.644 "state": "online", 00:21:28.644 "raid_level": "raid5f", 00:21:28.644 "superblock": true, 00:21:28.644 "num_base_bdevs": 4, 00:21:28.644 "num_base_bdevs_discovered": 3, 00:21:28.644 "num_base_bdevs_operational": 3, 00:21:28.644 "base_bdevs_list": [ 00:21:28.644 { 00:21:28.644 "name": null, 00:21:28.644 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:28.644 "is_configured": false, 00:21:28.644 "data_offset": 0, 00:21:28.644 "data_size": 63488 00:21:28.644 }, 00:21:28.644 { 00:21:28.644 "name": "BaseBdev2", 00:21:28.644 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:28.644 "is_configured": true, 00:21:28.644 "data_offset": 2048, 00:21:28.644 "data_size": 63488 00:21:28.644 }, 00:21:28.644 { 00:21:28.644 "name": "BaseBdev3", 00:21:28.644 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:28.644 "is_configured": true, 00:21:28.644 "data_offset": 2048, 00:21:28.644 "data_size": 63488 00:21:28.644 }, 00:21:28.644 { 00:21:28.644 "name": "BaseBdev4", 00:21:28.644 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:28.644 "is_configured": true, 00:21:28.644 "data_offset": 2048, 00:21:28.644 "data_size": 63488 00:21:28.644 } 00:21:28.644 ] 00:21:28.644 }' 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.644 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.211 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:29.211 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.211 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.211 [2024-11-20 11:33:36.776266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.211 [2024-11-20 11:33:36.776354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.211 [2024-11-20 11:33:36.776393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:29.211 [2024-11-20 11:33:36.776411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.211 [2024-11-20 11:33:36.777051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:29.211 [2024-11-20 11:33:36.777090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.211 [2024-11-20 11:33:36.777207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:29.211 [2024-11-20 11:33:36.777233] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.211 [2024-11-20 11:33:36.777248] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:29.211 [2024-11-20 11:33:36.777284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.211 [2024-11-20 11:33:36.790684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:21:29.211 spare 00:21:29.211 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.211 11:33:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:29.211 [2024-11-20 11:33:36.799415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.145 "name": "raid_bdev1", 00:21:30.145 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:30.145 "strip_size_kb": 64, 00:21:30.145 "state": "online", 00:21:30.145 "raid_level": "raid5f", 00:21:30.145 "superblock": true, 00:21:30.145 "num_base_bdevs": 4, 00:21:30.145 "num_base_bdevs_discovered": 4, 00:21:30.145 "num_base_bdevs_operational": 4, 00:21:30.145 "process": { 00:21:30.145 "type": "rebuild", 00:21:30.145 "target": "spare", 00:21:30.145 "progress": { 00:21:30.145 "blocks": 17280, 00:21:30.145 "percent": 9 00:21:30.145 } 00:21:30.145 }, 00:21:30.145 "base_bdevs_list": [ 00:21:30.145 { 00:21:30.145 "name": "spare", 00:21:30.145 "uuid": "cc9f63f7-47af-5d33-9fbc-2378ce410f93", 00:21:30.145 "is_configured": true, 00:21:30.145 "data_offset": 2048, 00:21:30.145 "data_size": 63488 00:21:30.145 }, 00:21:30.145 { 00:21:30.145 "name": "BaseBdev2", 00:21:30.145 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:30.145 "is_configured": true, 00:21:30.145 "data_offset": 2048, 00:21:30.145 "data_size": 63488 00:21:30.145 }, 00:21:30.145 { 00:21:30.145 "name": "BaseBdev3", 00:21:30.145 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:30.145 "is_configured": true, 00:21:30.145 "data_offset": 2048, 00:21:30.145 "data_size": 63488 00:21:30.145 }, 00:21:30.145 { 00:21:30.145 "name": "BaseBdev4", 00:21:30.145 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:30.145 "is_configured": true, 00:21:30.145 "data_offset": 2048, 00:21:30.145 "data_size": 63488 00:21:30.145 } 00:21:30.145 ] 00:21:30.145 }' 00:21:30.145 11:33:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.145 11:33:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.145 [2024-11-20 11:33:37.941450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.405 [2024-11-20 11:33:38.012431] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.405 [2024-11-20 11:33:38.012519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.405 [2024-11-20 11:33:38.012548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.405 [2024-11-20 11:33:38.012559] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.405 
11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.405 "name": "raid_bdev1", 00:21:30.405 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:30.405 "strip_size_kb": 64, 00:21:30.405 "state": "online", 00:21:30.405 "raid_level": "raid5f", 00:21:30.405 "superblock": true, 00:21:30.405 "num_base_bdevs": 4, 00:21:30.405 "num_base_bdevs_discovered": 3, 00:21:30.405 "num_base_bdevs_operational": 3, 00:21:30.405 "base_bdevs_list": [ 00:21:30.405 { 00:21:30.405 "name": null, 00:21:30.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.405 "is_configured": false, 00:21:30.405 "data_offset": 0, 00:21:30.405 "data_size": 63488 00:21:30.405 }, 00:21:30.405 { 00:21:30.405 "name": "BaseBdev2", 00:21:30.405 "uuid": 
"7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:30.405 "is_configured": true, 00:21:30.405 "data_offset": 2048, 00:21:30.405 "data_size": 63488 00:21:30.405 }, 00:21:30.405 { 00:21:30.405 "name": "BaseBdev3", 00:21:30.405 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:30.405 "is_configured": true, 00:21:30.405 "data_offset": 2048, 00:21:30.405 "data_size": 63488 00:21:30.405 }, 00:21:30.405 { 00:21:30.405 "name": "BaseBdev4", 00:21:30.405 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:30.405 "is_configured": true, 00:21:30.405 "data_offset": 2048, 00:21:30.405 "data_size": 63488 00:21:30.405 } 00:21:30.405 ] 00:21:30.405 }' 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.405 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.973 11:33:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.973 "name": "raid_bdev1", 00:21:30.973 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:30.973 "strip_size_kb": 64, 00:21:30.973 "state": "online", 00:21:30.973 "raid_level": "raid5f", 00:21:30.973 "superblock": true, 00:21:30.973 "num_base_bdevs": 4, 00:21:30.973 "num_base_bdevs_discovered": 3, 00:21:30.973 "num_base_bdevs_operational": 3, 00:21:30.973 "base_bdevs_list": [ 00:21:30.973 { 00:21:30.973 "name": null, 00:21:30.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.973 "is_configured": false, 00:21:30.973 "data_offset": 0, 00:21:30.973 "data_size": 63488 00:21:30.973 }, 00:21:30.973 { 00:21:30.973 "name": "BaseBdev2", 00:21:30.973 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:30.973 "is_configured": true, 00:21:30.973 "data_offset": 2048, 00:21:30.973 "data_size": 63488 00:21:30.973 }, 00:21:30.973 { 00:21:30.973 "name": "BaseBdev3", 00:21:30.973 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:30.973 "is_configured": true, 00:21:30.973 "data_offset": 2048, 00:21:30.973 "data_size": 63488 00:21:30.973 }, 00:21:30.973 { 00:21:30.973 "name": "BaseBdev4", 00:21:30.973 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:30.973 "is_configured": true, 00:21:30.973 "data_offset": 2048, 00:21:30.973 "data_size": 63488 00:21:30.973 } 00:21:30.973 ] 00:21:30.973 }' 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:30.973 
11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.973 [2024-11-20 11:33:38.727392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:30.973 [2024-11-20 11:33:38.727479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.973 [2024-11-20 11:33:38.727508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:30.973 [2024-11-20 11:33:38.727522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.973 [2024-11-20 11:33:38.728142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.973 [2024-11-20 11:33:38.728174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:30.973 [2024-11-20 11:33:38.728272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:30.973 [2024-11-20 11:33:38.728299] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:30.973 [2024-11-20 11:33:38.728315] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:30.973 [2024-11-20 11:33:38.728343] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:21:30.973 BaseBdev1 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.973 11:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:31.913 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:31.913 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:31.913 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.913 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.913 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.914 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.172 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:32.172 "name": "raid_bdev1", 00:21:32.172 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:32.172 "strip_size_kb": 64, 00:21:32.172 "state": "online", 00:21:32.172 "raid_level": "raid5f", 00:21:32.172 "superblock": true, 00:21:32.172 "num_base_bdevs": 4, 00:21:32.172 "num_base_bdevs_discovered": 3, 00:21:32.172 "num_base_bdevs_operational": 3, 00:21:32.172 "base_bdevs_list": [ 00:21:32.172 { 00:21:32.172 "name": null, 00:21:32.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.172 "is_configured": false, 00:21:32.172 "data_offset": 0, 00:21:32.172 "data_size": 63488 00:21:32.172 }, 00:21:32.172 { 00:21:32.172 "name": "BaseBdev2", 00:21:32.172 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:32.172 "is_configured": true, 00:21:32.172 "data_offset": 2048, 00:21:32.172 "data_size": 63488 00:21:32.173 }, 00:21:32.173 { 00:21:32.173 "name": "BaseBdev3", 00:21:32.173 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:32.173 "is_configured": true, 00:21:32.173 "data_offset": 2048, 00:21:32.173 "data_size": 63488 00:21:32.173 }, 00:21:32.173 { 00:21:32.173 "name": "BaseBdev4", 00:21:32.173 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:32.173 "is_configured": true, 00:21:32.173 "data_offset": 2048, 00:21:32.173 "data_size": 63488 00:21:32.173 } 00:21:32.173 ] 00:21:32.173 }' 00:21:32.173 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.173 11:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.431 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.689 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.690 "name": "raid_bdev1", 00:21:32.690 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:32.690 "strip_size_kb": 64, 00:21:32.690 "state": "online", 00:21:32.690 "raid_level": "raid5f", 00:21:32.690 "superblock": true, 00:21:32.690 "num_base_bdevs": 4, 00:21:32.690 "num_base_bdevs_discovered": 3, 00:21:32.690 "num_base_bdevs_operational": 3, 00:21:32.690 "base_bdevs_list": [ 00:21:32.690 { 00:21:32.690 "name": null, 00:21:32.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.690 "is_configured": false, 00:21:32.690 "data_offset": 0, 00:21:32.690 "data_size": 63488 00:21:32.690 }, 00:21:32.690 { 00:21:32.690 "name": "BaseBdev2", 00:21:32.690 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:32.690 "is_configured": true, 00:21:32.690 "data_offset": 2048, 00:21:32.690 "data_size": 63488 00:21:32.690 }, 00:21:32.690 { 00:21:32.690 "name": "BaseBdev3", 00:21:32.690 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:32.690 "is_configured": true, 00:21:32.690 "data_offset": 2048, 00:21:32.690 "data_size": 63488 00:21:32.690 }, 00:21:32.690 { 00:21:32.690 "name": "BaseBdev4", 00:21:32.690 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:32.690 "is_configured": true, 
00:21:32.690 "data_offset": 2048, 00:21:32.690 "data_size": 63488 00:21:32.690 } 00:21:32.690 ] 00:21:32.690 }' 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.690 [2024-11-20 11:33:40.420043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.690 [2024-11-20 11:33:40.420248] bdev_raid.c:3700:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:32.690 [2024-11-20 11:33:40.420273] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:32.690 request: 00:21:32.690 { 00:21:32.690 "base_bdev": "BaseBdev1", 00:21:32.690 "raid_bdev": "raid_bdev1", 00:21:32.690 "method": "bdev_raid_add_base_bdev", 00:21:32.690 "req_id": 1 00:21:32.690 } 00:21:32.690 Got JSON-RPC error response 00:21:32.690 response: 00:21:32.690 { 00:21:32.690 "code": -22, 00:21:32.690 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:32.690 } 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.690 11:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.626 11:33:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.626 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.884 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.884 "name": "raid_bdev1", 00:21:33.884 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:33.884 "strip_size_kb": 64, 00:21:33.884 "state": "online", 00:21:33.884 "raid_level": "raid5f", 00:21:33.884 "superblock": true, 00:21:33.884 "num_base_bdevs": 4, 00:21:33.884 "num_base_bdevs_discovered": 3, 00:21:33.884 "num_base_bdevs_operational": 3, 00:21:33.884 "base_bdevs_list": [ 00:21:33.884 { 00:21:33.885 "name": null, 00:21:33.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.885 "is_configured": false, 00:21:33.885 "data_offset": 0, 00:21:33.885 "data_size": 63488 00:21:33.885 }, 00:21:33.885 { 00:21:33.885 "name": "BaseBdev2", 00:21:33.885 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:33.885 "is_configured": true, 00:21:33.885 "data_offset": 2048, 00:21:33.885 "data_size": 63488 00:21:33.885 }, 00:21:33.885 { 00:21:33.885 "name": "BaseBdev3", 00:21:33.885 "uuid": 
"b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:33.885 "is_configured": true, 00:21:33.885 "data_offset": 2048, 00:21:33.885 "data_size": 63488 00:21:33.885 }, 00:21:33.885 { 00:21:33.885 "name": "BaseBdev4", 00:21:33.885 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:33.885 "is_configured": true, 00:21:33.885 "data_offset": 2048, 00:21:33.885 "data_size": 63488 00:21:33.885 } 00:21:33.885 ] 00:21:33.885 }' 00:21:33.885 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.885 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.144 11:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.403 "name": "raid_bdev1", 00:21:34.403 "uuid": "1d01b9dc-d73b-4704-8d24-d079184bf3fe", 00:21:34.403 "strip_size_kb": 64, 00:21:34.403 "state": 
"online", 00:21:34.403 "raid_level": "raid5f", 00:21:34.403 "superblock": true, 00:21:34.403 "num_base_bdevs": 4, 00:21:34.403 "num_base_bdevs_discovered": 3, 00:21:34.403 "num_base_bdevs_operational": 3, 00:21:34.403 "base_bdevs_list": [ 00:21:34.403 { 00:21:34.403 "name": null, 00:21:34.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.403 "is_configured": false, 00:21:34.403 "data_offset": 0, 00:21:34.403 "data_size": 63488 00:21:34.403 }, 00:21:34.403 { 00:21:34.403 "name": "BaseBdev2", 00:21:34.403 "uuid": "7b6472af-360b-575c-92f5-328cb8073ce5", 00:21:34.403 "is_configured": true, 00:21:34.403 "data_offset": 2048, 00:21:34.403 "data_size": 63488 00:21:34.403 }, 00:21:34.403 { 00:21:34.403 "name": "BaseBdev3", 00:21:34.403 "uuid": "b9a46bb0-97b6-55df-8faf-a08fccd61e78", 00:21:34.403 "is_configured": true, 00:21:34.403 "data_offset": 2048, 00:21:34.403 "data_size": 63488 00:21:34.403 }, 00:21:34.403 { 00:21:34.403 "name": "BaseBdev4", 00:21:34.403 "uuid": "8dd77cde-6559-501f-8a6c-8d5b445ea259", 00:21:34.403 "is_configured": true, 00:21:34.403 "data_offset": 2048, 00:21:34.403 "data_size": 63488 00:21:34.403 } 00:21:34.403 ] 00:21:34.403 }' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85446 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85446 ']' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85446 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@959 -- # uname 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85446 00:21:34.403 killing process with pid 85446 00:21:34.403 Received shutdown signal, test time was about 60.000000 seconds 00:21:34.403 00:21:34.403 Latency(us) 00:21:34.403 [2024-11-20T11:33:42.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.403 [2024-11-20T11:33:42.249Z] =================================================================================================================== 00:21:34.403 [2024-11-20T11:33:42.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85446' 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85446 00:21:34.403 [2024-11-20 11:33:42.148097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.403 11:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85446 00:21:34.403 [2024-11-20 11:33:42.148249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.403 [2024-11-20 11:33:42.148352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.403 [2024-11-20 11:33:42.148375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:34.972 [2024-11-20 11:33:42.597772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:21:35.909 11:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:35.909 00:21:35.909 real 0m28.633s 00:21:35.909 user 0m37.410s 00:21:35.909 sys 0m2.788s 00:21:35.909 ************************************ 00:21:35.909 END TEST raid5f_rebuild_test_sb 00:21:35.909 ************************************ 00:21:35.909 11:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.909 11:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.909 11:33:43 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:21:35.909 11:33:43 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:35.909 11:33:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.909 11:33:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.909 11:33:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.909 ************************************ 00:21:35.909 START TEST raid_state_function_test_sb_4k 00:21:35.909 ************************************ 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:35.909 Process raid pid: 86270 00:21:35.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:35.909 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86270 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86270' 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86270 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86270 ']' 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.910 11:33:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.168 [2024-11-20 11:33:43.822881] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:21:36.168 [2024-11-20 11:33:43.823280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.168 [2024-11-20 11:33:44.008736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.427 [2024-11-20 11:33:44.140347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.685 [2024-11-20 11:33:44.347785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.685 [2024-11-20 11:33:44.348024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.254 [2024-11-20 11:33:44.864719] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.254 [2024-11-20 11:33:44.864794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.254 [2024-11-20 11:33:44.864819] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.254 [2024-11-20 11:33:44.864835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.254 "name": "Existed_Raid", 00:21:37.254 "uuid": 
"587eb6d2-0107-4897-a036-9628246089dc", 00:21:37.254 "strip_size_kb": 0, 00:21:37.254 "state": "configuring", 00:21:37.254 "raid_level": "raid1", 00:21:37.254 "superblock": true, 00:21:37.254 "num_base_bdevs": 2, 00:21:37.254 "num_base_bdevs_discovered": 0, 00:21:37.254 "num_base_bdevs_operational": 2, 00:21:37.254 "base_bdevs_list": [ 00:21:37.254 { 00:21:37.254 "name": "BaseBdev1", 00:21:37.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.254 "is_configured": false, 00:21:37.254 "data_offset": 0, 00:21:37.254 "data_size": 0 00:21:37.254 }, 00:21:37.254 { 00:21:37.254 "name": "BaseBdev2", 00:21:37.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.254 "is_configured": false, 00:21:37.254 "data_offset": 0, 00:21:37.254 "data_size": 0 00:21:37.254 } 00:21:37.254 ] 00:21:37.254 }' 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.254 11:33:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.513 [2024-11-20 11:33:45.332818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.513 [2024-11-20 11:33:45.332861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.513 11:33:45 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.513 [2024-11-20 11:33:45.340768] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.513 [2024-11-20 11:33:45.340831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.513 [2024-11-20 11:33:45.340846] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.513 [2024-11-20 11:33:45.340865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.513 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.772 [2024-11-20 11:33:45.391106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.772 BaseBdev1 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.772 [ 00:21:37.772 { 00:21:37.772 "name": "BaseBdev1", 00:21:37.772 "aliases": [ 00:21:37.772 "9506baec-7a9a-443f-9207-971954d0009b" 00:21:37.772 ], 00:21:37.772 "product_name": "Malloc disk", 00:21:37.772 "block_size": 4096, 00:21:37.772 "num_blocks": 8192, 00:21:37.772 "uuid": "9506baec-7a9a-443f-9207-971954d0009b", 00:21:37.772 "assigned_rate_limits": { 00:21:37.772 "rw_ios_per_sec": 0, 00:21:37.772 "rw_mbytes_per_sec": 0, 00:21:37.772 "r_mbytes_per_sec": 0, 00:21:37.772 "w_mbytes_per_sec": 0 00:21:37.772 }, 00:21:37.772 "claimed": true, 00:21:37.772 "claim_type": "exclusive_write", 00:21:37.772 "zoned": false, 00:21:37.772 "supported_io_types": { 00:21:37.772 "read": true, 00:21:37.772 "write": true, 00:21:37.772 "unmap": true, 00:21:37.772 "flush": true, 00:21:37.772 "reset": true, 00:21:37.772 "nvme_admin": false, 00:21:37.772 "nvme_io": false, 00:21:37.772 "nvme_io_md": false, 00:21:37.772 "write_zeroes": true, 00:21:37.772 "zcopy": true, 00:21:37.772 
"get_zone_info": false, 00:21:37.772 "zone_management": false, 00:21:37.772 "zone_append": false, 00:21:37.772 "compare": false, 00:21:37.772 "compare_and_write": false, 00:21:37.772 "abort": true, 00:21:37.772 "seek_hole": false, 00:21:37.772 "seek_data": false, 00:21:37.772 "copy": true, 00:21:37.772 "nvme_iov_md": false 00:21:37.772 }, 00:21:37.772 "memory_domains": [ 00:21:37.772 { 00:21:37.772 "dma_device_id": "system", 00:21:37.772 "dma_device_type": 1 00:21:37.772 }, 00:21:37.772 { 00:21:37.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.772 "dma_device_type": 2 00:21:37.772 } 00:21:37.772 ], 00:21:37.772 "driver_specific": {} 00:21:37.772 } 00:21:37.772 ] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.772 "name": "Existed_Raid", 00:21:37.772 "uuid": "a3f2761d-fd80-4c1f-8645-cde540539587", 00:21:37.772 "strip_size_kb": 0, 00:21:37.772 "state": "configuring", 00:21:37.772 "raid_level": "raid1", 00:21:37.772 "superblock": true, 00:21:37.772 "num_base_bdevs": 2, 00:21:37.772 "num_base_bdevs_discovered": 1, 00:21:37.772 "num_base_bdevs_operational": 2, 00:21:37.772 "base_bdevs_list": [ 00:21:37.772 { 00:21:37.772 "name": "BaseBdev1", 00:21:37.772 "uuid": "9506baec-7a9a-443f-9207-971954d0009b", 00:21:37.772 "is_configured": true, 00:21:37.772 "data_offset": 256, 00:21:37.772 "data_size": 7936 00:21:37.772 }, 00:21:37.772 { 00:21:37.772 "name": "BaseBdev2", 00:21:37.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.772 "is_configured": false, 00:21:37.772 "data_offset": 0, 00:21:37.772 "data_size": 0 00:21:37.772 } 00:21:37.772 ] 00:21:37.772 }' 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.772 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.340 [2024-11-20 11:33:45.959303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.340 [2024-11-20 11:33:45.959576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.340 [2024-11-20 11:33:45.971314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.340 [2024-11-20 11:33:45.973969] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.340 [2024-11-20 11:33:45.974182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:38.340 11:33:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.340 11:33:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.340 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.340 "name": "Existed_Raid", 00:21:38.340 "uuid": "9e822b6f-4acb-4758-961d-b6e632fa1a4b", 00:21:38.340 "strip_size_kb": 0, 00:21:38.340 "state": "configuring", 00:21:38.340 "raid_level": "raid1", 00:21:38.340 "superblock": true, 
00:21:38.340 "num_base_bdevs": 2, 00:21:38.340 "num_base_bdevs_discovered": 1, 00:21:38.340 "num_base_bdevs_operational": 2, 00:21:38.340 "base_bdevs_list": [ 00:21:38.340 { 00:21:38.340 "name": "BaseBdev1", 00:21:38.340 "uuid": "9506baec-7a9a-443f-9207-971954d0009b", 00:21:38.340 "is_configured": true, 00:21:38.340 "data_offset": 256, 00:21:38.340 "data_size": 7936 00:21:38.340 }, 00:21:38.340 { 00:21:38.340 "name": "BaseBdev2", 00:21:38.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.340 "is_configured": false, 00:21:38.340 "data_offset": 0, 00:21:38.340 "data_size": 0 00:21:38.340 } 00:21:38.340 ] 00:21:38.340 }' 00:21:38.340 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.340 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.907 [2024-11-20 11:33:46.527425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.907 [2024-11-20 11:33:46.527911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:38.907 [2024-11-20 11:33:46.527937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.907 BaseBdev2 00:21:38.907 [2024-11-20 11:33:46.528263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:38.907 [2024-11-20 11:33:46.528455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:38.907 [2024-11-20 11:33:46.528488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:21:38.907 [2024-11-20 11:33:46.528681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.907 [ 00:21:38.907 { 00:21:38.907 "name": "BaseBdev2", 00:21:38.907 "aliases": [ 00:21:38.907 "a2f0833f-0497-4d21-b4f3-77a7a5d2bcba" 00:21:38.907 ], 00:21:38.907 "product_name": "Malloc 
disk", 00:21:38.907 "block_size": 4096, 00:21:38.907 "num_blocks": 8192, 00:21:38.907 "uuid": "a2f0833f-0497-4d21-b4f3-77a7a5d2bcba", 00:21:38.907 "assigned_rate_limits": { 00:21:38.907 "rw_ios_per_sec": 0, 00:21:38.907 "rw_mbytes_per_sec": 0, 00:21:38.907 "r_mbytes_per_sec": 0, 00:21:38.907 "w_mbytes_per_sec": 0 00:21:38.907 }, 00:21:38.907 "claimed": true, 00:21:38.907 "claim_type": "exclusive_write", 00:21:38.907 "zoned": false, 00:21:38.907 "supported_io_types": { 00:21:38.907 "read": true, 00:21:38.907 "write": true, 00:21:38.907 "unmap": true, 00:21:38.907 "flush": true, 00:21:38.907 "reset": true, 00:21:38.907 "nvme_admin": false, 00:21:38.907 "nvme_io": false, 00:21:38.907 "nvme_io_md": false, 00:21:38.907 "write_zeroes": true, 00:21:38.907 "zcopy": true, 00:21:38.907 "get_zone_info": false, 00:21:38.907 "zone_management": false, 00:21:38.907 "zone_append": false, 00:21:38.907 "compare": false, 00:21:38.907 "compare_and_write": false, 00:21:38.907 "abort": true, 00:21:38.907 "seek_hole": false, 00:21:38.907 "seek_data": false, 00:21:38.907 "copy": true, 00:21:38.907 "nvme_iov_md": false 00:21:38.907 }, 00:21:38.907 "memory_domains": [ 00:21:38.907 { 00:21:38.907 "dma_device_id": "system", 00:21:38.907 "dma_device_type": 1 00:21:38.907 }, 00:21:38.907 { 00:21:38.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.907 "dma_device_type": 2 00:21:38.907 } 00:21:38.907 ], 00:21:38.907 "driver_specific": {} 00:21:38.907 } 00:21:38.907 ] 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.907 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.908 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.908 "name": "Existed_Raid", 00:21:38.908 "uuid": "9e822b6f-4acb-4758-961d-b6e632fa1a4b", 00:21:38.908 "strip_size_kb": 0, 00:21:38.908 "state": "online", 
00:21:38.908 "raid_level": "raid1", 00:21:38.908 "superblock": true, 00:21:38.908 "num_base_bdevs": 2, 00:21:38.908 "num_base_bdevs_discovered": 2, 00:21:38.908 "num_base_bdevs_operational": 2, 00:21:38.908 "base_bdevs_list": [ 00:21:38.908 { 00:21:38.908 "name": "BaseBdev1", 00:21:38.908 "uuid": "9506baec-7a9a-443f-9207-971954d0009b", 00:21:38.908 "is_configured": true, 00:21:38.908 "data_offset": 256, 00:21:38.908 "data_size": 7936 00:21:38.908 }, 00:21:38.908 { 00:21:38.908 "name": "BaseBdev2", 00:21:38.908 "uuid": "a2f0833f-0497-4d21-b4f3-77a7a5d2bcba", 00:21:38.908 "is_configured": true, 00:21:38.908 "data_offset": 256, 00:21:38.908 "data_size": 7936 00:21:38.908 } 00:21:38.908 ] 00:21:38.908 }' 00:21:38.908 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.908 11:33:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.474 [2024-11-20 11:33:47.108032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.474 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.474 "name": "Existed_Raid", 00:21:39.474 "aliases": [ 00:21:39.474 "9e822b6f-4acb-4758-961d-b6e632fa1a4b" 00:21:39.474 ], 00:21:39.474 "product_name": "Raid Volume", 00:21:39.474 "block_size": 4096, 00:21:39.474 "num_blocks": 7936, 00:21:39.474 "uuid": "9e822b6f-4acb-4758-961d-b6e632fa1a4b", 00:21:39.474 "assigned_rate_limits": { 00:21:39.474 "rw_ios_per_sec": 0, 00:21:39.474 "rw_mbytes_per_sec": 0, 00:21:39.474 "r_mbytes_per_sec": 0, 00:21:39.474 "w_mbytes_per_sec": 0 00:21:39.474 }, 00:21:39.474 "claimed": false, 00:21:39.474 "zoned": false, 00:21:39.474 "supported_io_types": { 00:21:39.474 "read": true, 00:21:39.474 "write": true, 00:21:39.474 "unmap": false, 00:21:39.474 "flush": false, 00:21:39.474 "reset": true, 00:21:39.474 "nvme_admin": false, 00:21:39.474 "nvme_io": false, 00:21:39.474 "nvme_io_md": false, 00:21:39.474 "write_zeroes": true, 00:21:39.474 "zcopy": false, 00:21:39.474 "get_zone_info": false, 00:21:39.474 "zone_management": false, 00:21:39.474 "zone_append": false, 00:21:39.474 "compare": false, 00:21:39.474 "compare_and_write": false, 00:21:39.474 "abort": false, 00:21:39.474 "seek_hole": false, 00:21:39.474 "seek_data": false, 00:21:39.474 "copy": false, 00:21:39.474 "nvme_iov_md": false 00:21:39.474 }, 00:21:39.474 "memory_domains": [ 00:21:39.474 { 00:21:39.474 "dma_device_id": "system", 00:21:39.474 "dma_device_type": 1 00:21:39.474 }, 00:21:39.474 { 00:21:39.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.474 "dma_device_type": 2 00:21:39.474 }, 00:21:39.474 { 00:21:39.474 
"dma_device_id": "system", 00:21:39.474 "dma_device_type": 1 00:21:39.474 }, 00:21:39.474 { 00:21:39.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.474 "dma_device_type": 2 00:21:39.474 } 00:21:39.474 ], 00:21:39.474 "driver_specific": { 00:21:39.474 "raid": { 00:21:39.474 "uuid": "9e822b6f-4acb-4758-961d-b6e632fa1a4b", 00:21:39.474 "strip_size_kb": 0, 00:21:39.474 "state": "online", 00:21:39.474 "raid_level": "raid1", 00:21:39.474 "superblock": true, 00:21:39.474 "num_base_bdevs": 2, 00:21:39.474 "num_base_bdevs_discovered": 2, 00:21:39.474 "num_base_bdevs_operational": 2, 00:21:39.474 "base_bdevs_list": [ 00:21:39.474 { 00:21:39.474 "name": "BaseBdev1", 00:21:39.475 "uuid": "9506baec-7a9a-443f-9207-971954d0009b", 00:21:39.475 "is_configured": true, 00:21:39.475 "data_offset": 256, 00:21:39.475 "data_size": 7936 00:21:39.475 }, 00:21:39.475 { 00:21:39.475 "name": "BaseBdev2", 00:21:39.475 "uuid": "a2f0833f-0497-4d21-b4f3-77a7a5d2bcba", 00:21:39.475 "is_configured": true, 00:21:39.475 "data_offset": 256, 00:21:39.475 "data_size": 7936 00:21:39.475 } 00:21:39.475 ] 00:21:39.475 } 00:21:39.475 } 00:21:39.475 }' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:39.475 BaseBdev2' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.475 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.734 
11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.734 [2024-11-20 11:33:47.399856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.734 11:33:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.734 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.734 "name": "Existed_Raid", 00:21:39.734 "uuid": "9e822b6f-4acb-4758-961d-b6e632fa1a4b", 00:21:39.734 "strip_size_kb": 0, 00:21:39.734 "state": "online", 00:21:39.734 "raid_level": "raid1", 00:21:39.734 "superblock": true, 00:21:39.734 "num_base_bdevs": 2, 00:21:39.734 "num_base_bdevs_discovered": 1, 00:21:39.734 "num_base_bdevs_operational": 1, 00:21:39.734 "base_bdevs_list": [ 00:21:39.734 { 00:21:39.734 "name": null, 00:21:39.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.734 "is_configured": false, 00:21:39.734 "data_offset": 0, 00:21:39.734 "data_size": 7936 00:21:39.734 }, 00:21:39.734 { 00:21:39.734 "name": "BaseBdev2", 00:21:39.734 "uuid": "a2f0833f-0497-4d21-b4f3-77a7a5d2bcba", 00:21:39.734 "is_configured": true, 00:21:39.734 "data_offset": 256, 00:21:39.734 "data_size": 7936 00:21:39.734 } 00:21:39.734 ] 00:21:39.734 }' 00:21:39.735 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.735 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.301 11:33:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:40.302 11:33:47 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.302 [2024-11-20 11:33:48.054696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.302 [2024-11-20 11:33:48.054844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.302 [2024-11-20 11:33:48.144016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.302 [2024-11-20 11:33:48.144279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.302 [2024-11-20 11:33:48.144428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:40.302 11:33:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.302 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86270 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86270 ']' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86270 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86270 00:21:40.561 killing process with pid 86270 00:21:40.561 11:33:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86270' 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86270 00:21:40.561 [2024-11-20 11:33:48.235149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.561 11:33:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86270 00:21:40.561 [2024-11-20 11:33:48.250547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.507 11:33:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:21:41.507 00:21:41.507 real 0m5.613s 00:21:41.507 user 0m8.471s 00:21:41.507 sys 0m0.807s 00:21:41.507 11:33:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.507 11:33:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.507 ************************************ 00:21:41.507 END TEST raid_state_function_test_sb_4k 00:21:41.507 ************************************ 00:21:41.764 11:33:49 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:41.764 11:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:41.764 11:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.764 11:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.764 ************************************ 00:21:41.764 START TEST raid_superblock_test_4k 00:21:41.764 ************************************ 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86523 00:21:41.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86523 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86523 ']' 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.764 11:33:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.764 [2024-11-20 11:33:49.514263] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:21:41.764 [2024-11-20 11:33:49.514444] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86523 ] 00:21:42.023 [2024-11-20 11:33:49.706206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.023 [2024-11-20 11:33:49.863007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.281 [2024-11-20 11:33:50.076339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.281 [2024-11-20 11:33:50.076416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.848 malloc1 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.848 [2024-11-20 11:33:50.574932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:42.848 [2024-11-20 11:33:50.575157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.848 [2024-11-20 11:33:50.575229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:42.848 [2024-11-20 11:33:50.575351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.848 [2024-11-20 11:33:50.578126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.848 [2024-11-20 11:33:50.578303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:42.848 pt1 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.848 malloc2 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.848 [2024-11-20 11:33:50.631336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.848 [2024-11-20 11:33:50.631409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.848 [2024-11-20 11:33:50.631439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:42.848 [2024-11-20 11:33:50.631454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.848 [2024-11-20 11:33:50.634291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.848 [2024-11-20 
11:33:50.634336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.848 pt2 00:21:42.848 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.849 [2024-11-20 11:33:50.643448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:42.849 [2024-11-20 11:33:50.646041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.849 [2024-11-20 11:33:50.646314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:42.849 [2024-11-20 11:33:50.646339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:42.849 [2024-11-20 11:33:50.646700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:42.849 [2024-11-20 11:33:50.646922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:42.849 [2024-11-20 11:33:50.646959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:42.849 [2024-11-20 11:33:50.647178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.849 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.141 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.141 "name": "raid_bdev1", 00:21:43.141 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:43.141 "strip_size_kb": 0, 00:21:43.141 "state": "online", 00:21:43.141 "raid_level": "raid1", 00:21:43.141 "superblock": true, 00:21:43.141 "num_base_bdevs": 2, 00:21:43.141 
"num_base_bdevs_discovered": 2, 00:21:43.141 "num_base_bdevs_operational": 2, 00:21:43.141 "base_bdevs_list": [ 00:21:43.141 { 00:21:43.141 "name": "pt1", 00:21:43.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.141 "is_configured": true, 00:21:43.141 "data_offset": 256, 00:21:43.141 "data_size": 7936 00:21:43.141 }, 00:21:43.141 { 00:21:43.141 "name": "pt2", 00:21:43.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.141 "is_configured": true, 00:21:43.141 "data_offset": 256, 00:21:43.141 "data_size": 7936 00:21:43.141 } 00:21:43.141 ] 00:21:43.141 }' 00:21:43.141 11:33:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.141 11:33:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:43.402 [2024-11-20 11:33:51.143885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.402 "name": "raid_bdev1", 00:21:43.402 "aliases": [ 00:21:43.402 "0da11421-2537-4151-a58c-1ec68c233741" 00:21:43.402 ], 00:21:43.402 "product_name": "Raid Volume", 00:21:43.402 "block_size": 4096, 00:21:43.402 "num_blocks": 7936, 00:21:43.402 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:43.402 "assigned_rate_limits": { 00:21:43.402 "rw_ios_per_sec": 0, 00:21:43.402 "rw_mbytes_per_sec": 0, 00:21:43.402 "r_mbytes_per_sec": 0, 00:21:43.402 "w_mbytes_per_sec": 0 00:21:43.402 }, 00:21:43.402 "claimed": false, 00:21:43.402 "zoned": false, 00:21:43.402 "supported_io_types": { 00:21:43.402 "read": true, 00:21:43.402 "write": true, 00:21:43.402 "unmap": false, 00:21:43.402 "flush": false, 00:21:43.402 "reset": true, 00:21:43.402 "nvme_admin": false, 00:21:43.402 "nvme_io": false, 00:21:43.402 "nvme_io_md": false, 00:21:43.402 "write_zeroes": true, 00:21:43.402 "zcopy": false, 00:21:43.402 "get_zone_info": false, 00:21:43.402 "zone_management": false, 00:21:43.402 "zone_append": false, 00:21:43.402 "compare": false, 00:21:43.402 "compare_and_write": false, 00:21:43.402 "abort": false, 00:21:43.402 "seek_hole": false, 00:21:43.402 "seek_data": false, 00:21:43.402 "copy": false, 00:21:43.402 "nvme_iov_md": false 00:21:43.402 }, 00:21:43.402 "memory_domains": [ 00:21:43.402 { 00:21:43.402 "dma_device_id": "system", 00:21:43.402 "dma_device_type": 1 00:21:43.402 }, 00:21:43.402 { 00:21:43.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.402 "dma_device_type": 2 00:21:43.402 }, 00:21:43.402 { 00:21:43.402 "dma_device_id": "system", 00:21:43.402 "dma_device_type": 1 00:21:43.402 }, 00:21:43.402 { 00:21:43.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.402 "dma_device_type": 2 00:21:43.402 } 00:21:43.402 ], 
00:21:43.402 "driver_specific": { 00:21:43.402 "raid": { 00:21:43.402 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:43.402 "strip_size_kb": 0, 00:21:43.402 "state": "online", 00:21:43.402 "raid_level": "raid1", 00:21:43.402 "superblock": true, 00:21:43.402 "num_base_bdevs": 2, 00:21:43.402 "num_base_bdevs_discovered": 2, 00:21:43.402 "num_base_bdevs_operational": 2, 00:21:43.402 "base_bdevs_list": [ 00:21:43.402 { 00:21:43.402 "name": "pt1", 00:21:43.402 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.402 "is_configured": true, 00:21:43.402 "data_offset": 256, 00:21:43.402 "data_size": 7936 00:21:43.402 }, 00:21:43.402 { 00:21:43.402 "name": "pt2", 00:21:43.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.402 "is_configured": true, 00:21:43.402 "data_offset": 256, 00:21:43.402 "data_size": 7936 00:21:43.402 } 00:21:43.402 ] 00:21:43.402 } 00:21:43.402 } 00:21:43.402 }' 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:43.402 pt2' 00:21:43.402 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 [2024-11-20 11:33:51.375919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0da11421-2537-4151-a58c-1ec68c233741 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 0da11421-2537-4151-a58c-1ec68c233741 ']' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 [2024-11-20 11:33:51.415566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.663 [2024-11-20 11:33:51.415733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.663 [2024-11-20 11:33:51.415944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.663 [2024-11-20 11:33:51.416134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.663 [2024-11-20 11:33:51.416335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.663 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.924 [2024-11-20 11:33:51.559669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:43.924 [2024-11-20 11:33:51.562199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:43.924 [2024-11-20 11:33:51.562295] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:43.924 [2024-11-20 11:33:51.562378] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:43.924 [2024-11-20 11:33:51.562405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.924 [2024-11-20 11:33:51.562420] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:43.924 request: 00:21:43.924 { 00:21:43.924 "name": "raid_bdev1", 00:21:43.924 "raid_level": "raid1", 00:21:43.924 "base_bdevs": [ 00:21:43.924 "malloc1", 00:21:43.924 "malloc2" 00:21:43.924 ], 00:21:43.924 "superblock": false, 00:21:43.924 "method": "bdev_raid_create", 00:21:43.924 "req_id": 1 00:21:43.924 } 00:21:43.924 Got JSON-RPC error response 00:21:43.924 response: 00:21:43.924 { 00:21:43.924 "code": -17, 00:21:43.924 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:43.924 } 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.924 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.925 [2024-11-20 11:33:51.619714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.925 [2024-11-20 11:33:51.619922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.925 [2024-11-20 11:33:51.619992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:43.925 [2024-11-20 11:33:51.620146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.925 [2024-11-20 11:33:51.623174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.925 [2024-11-20 11:33:51.623331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.925 [2024-11-20 11:33:51.623578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.925 [2024-11-20 11:33:51.623856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.925 pt1 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.925 "name": "raid_bdev1", 00:21:43.925 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:43.925 "strip_size_kb": 0, 00:21:43.925 "state": "configuring", 00:21:43.925 "raid_level": "raid1", 00:21:43.925 "superblock": true, 00:21:43.925 "num_base_bdevs": 2, 00:21:43.925 "num_base_bdevs_discovered": 1, 00:21:43.925 "num_base_bdevs_operational": 2, 00:21:43.925 "base_bdevs_list": [ 00:21:43.925 { 00:21:43.925 "name": "pt1", 00:21:43.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.925 "is_configured": true, 00:21:43.925 "data_offset": 256, 00:21:43.925 "data_size": 7936 00:21:43.925 }, 00:21:43.925 { 00:21:43.925 "name": null, 00:21:43.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.925 "is_configured": false, 00:21:43.925 "data_offset": 256, 00:21:43.925 "data_size": 7936 00:21:43.925 } 
00:21:43.925 ] 00:21:43.925 }' 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.925 11:33:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.504 [2024-11-20 11:33:52.099820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:44.504 [2024-11-20 11:33:52.099906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.504 [2024-11-20 11:33:52.099936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:44.504 [2024-11-20 11:33:52.099953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.504 [2024-11-20 11:33:52.100510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.504 [2024-11-20 11:33:52.100547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:44.504 [2024-11-20 11:33:52.100664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:44.504 [2024-11-20 11:33:52.100701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.504 [2024-11-20 11:33:52.100849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:44.504 [2024-11-20 11:33:52.100870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:44.504 [2024-11-20 11:33:52.101183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:44.504 [2024-11-20 11:33:52.101376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:44.504 [2024-11-20 11:33:52.101391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:44.504 [2024-11-20 11:33:52.101561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.504 pt2 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.504 "name": "raid_bdev1", 00:21:44.504 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:44.504 "strip_size_kb": 0, 00:21:44.504 "state": "online", 00:21:44.504 "raid_level": "raid1", 00:21:44.504 "superblock": true, 00:21:44.504 "num_base_bdevs": 2, 00:21:44.504 "num_base_bdevs_discovered": 2, 00:21:44.504 "num_base_bdevs_operational": 2, 00:21:44.504 "base_bdevs_list": [ 00:21:44.504 { 00:21:44.504 "name": "pt1", 00:21:44.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:44.504 "is_configured": true, 00:21:44.504 "data_offset": 256, 00:21:44.504 "data_size": 7936 00:21:44.504 }, 00:21:44.504 { 00:21:44.504 "name": "pt2", 00:21:44.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.504 "is_configured": true, 00:21:44.504 "data_offset": 256, 00:21:44.504 "data_size": 7936 00:21:44.504 } 00:21:44.504 ] 00:21:44.504 }' 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.504 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:44.763 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:45.022 [2024-11-20 11:33:52.612253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.022 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:45.022 "name": "raid_bdev1", 00:21:45.022 "aliases": [ 00:21:45.022 "0da11421-2537-4151-a58c-1ec68c233741" 00:21:45.022 ], 00:21:45.022 "product_name": "Raid Volume", 00:21:45.022 "block_size": 4096, 00:21:45.022 "num_blocks": 7936, 00:21:45.022 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:45.022 "assigned_rate_limits": { 00:21:45.022 "rw_ios_per_sec": 0, 00:21:45.022 "rw_mbytes_per_sec": 0, 00:21:45.022 "r_mbytes_per_sec": 0, 00:21:45.022 "w_mbytes_per_sec": 0 00:21:45.022 }, 00:21:45.022 "claimed": false, 00:21:45.022 "zoned": false, 00:21:45.022 "supported_io_types": { 00:21:45.022 "read": true, 00:21:45.022 "write": true, 00:21:45.022 "unmap": false, 
00:21:45.022 "flush": false, 00:21:45.022 "reset": true, 00:21:45.022 "nvme_admin": false, 00:21:45.022 "nvme_io": false, 00:21:45.022 "nvme_io_md": false, 00:21:45.022 "write_zeroes": true, 00:21:45.022 "zcopy": false, 00:21:45.022 "get_zone_info": false, 00:21:45.022 "zone_management": false, 00:21:45.022 "zone_append": false, 00:21:45.022 "compare": false, 00:21:45.022 "compare_and_write": false, 00:21:45.022 "abort": false, 00:21:45.022 "seek_hole": false, 00:21:45.022 "seek_data": false, 00:21:45.022 "copy": false, 00:21:45.022 "nvme_iov_md": false 00:21:45.022 }, 00:21:45.022 "memory_domains": [ 00:21:45.022 { 00:21:45.022 "dma_device_id": "system", 00:21:45.022 "dma_device_type": 1 00:21:45.022 }, 00:21:45.022 { 00:21:45.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.022 "dma_device_type": 2 00:21:45.022 }, 00:21:45.022 { 00:21:45.022 "dma_device_id": "system", 00:21:45.022 "dma_device_type": 1 00:21:45.022 }, 00:21:45.022 { 00:21:45.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.022 "dma_device_type": 2 00:21:45.022 } 00:21:45.022 ], 00:21:45.023 "driver_specific": { 00:21:45.023 "raid": { 00:21:45.023 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:45.023 "strip_size_kb": 0, 00:21:45.023 "state": "online", 00:21:45.023 "raid_level": "raid1", 00:21:45.023 "superblock": true, 00:21:45.023 "num_base_bdevs": 2, 00:21:45.023 "num_base_bdevs_discovered": 2, 00:21:45.023 "num_base_bdevs_operational": 2, 00:21:45.023 "base_bdevs_list": [ 00:21:45.023 { 00:21:45.023 "name": "pt1", 00:21:45.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:45.023 "is_configured": true, 00:21:45.023 "data_offset": 256, 00:21:45.023 "data_size": 7936 00:21:45.023 }, 00:21:45.023 { 00:21:45.023 "name": "pt2", 00:21:45.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.023 "is_configured": true, 00:21:45.023 "data_offset": 256, 00:21:45.023 "data_size": 7936 00:21:45.023 } 00:21:45.023 ] 00:21:45.023 } 00:21:45.023 } 00:21:45.023 }' 00:21:45.023 
11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:45.023 pt2' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.023 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:45.282 [2024-11-20 11:33:52.876331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 0da11421-2537-4151-a58c-1ec68c233741 '!=' 0da11421-2537-4151-a58c-1ec68c233741 ']' 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.282 [2024-11-20 11:33:52.948102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.282 11:33:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.282 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.282 "name": "raid_bdev1", 00:21:45.282 "uuid": 
"0da11421-2537-4151-a58c-1ec68c233741", 00:21:45.282 "strip_size_kb": 0, 00:21:45.282 "state": "online", 00:21:45.282 "raid_level": "raid1", 00:21:45.282 "superblock": true, 00:21:45.282 "num_base_bdevs": 2, 00:21:45.282 "num_base_bdevs_discovered": 1, 00:21:45.282 "num_base_bdevs_operational": 1, 00:21:45.282 "base_bdevs_list": [ 00:21:45.282 { 00:21:45.282 "name": null, 00:21:45.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.282 "is_configured": false, 00:21:45.282 "data_offset": 0, 00:21:45.282 "data_size": 7936 00:21:45.282 }, 00:21:45.282 { 00:21:45.282 "name": "pt2", 00:21:45.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.282 "is_configured": true, 00:21:45.282 "data_offset": 256, 00:21:45.282 "data_size": 7936 00:21:45.282 } 00:21:45.282 ] 00:21:45.282 }' 00:21:45.282 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.282 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.849 [2024-11-20 11:33:53.476420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.849 [2024-11-20 11:33:53.476455] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.849 [2024-11-20 11:33:53.476554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.849 [2024-11-20 11:33:53.476643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.849 [2024-11-20 11:33:53.476666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:45.849 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.850 [2024-11-20 11:33:53.548449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:45.850 [2024-11-20 11:33:53.548541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.850 [2024-11-20 11:33:53.548571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:45.850 [2024-11-20 11:33:53.548600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.850 [2024-11-20 11:33:53.552142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.850 [2024-11-20 11:33:53.552192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:45.850 [2024-11-20 11:33:53.552304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:45.850 [2024-11-20 11:33:53.552372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:45.850 [2024-11-20 11:33:53.552502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:45.850 [2024-11-20 11:33:53.552525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:45.850 [2024-11-20 11:33:53.552831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:45.850 [2024-11-20 11:33:53.553025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:45.850 [2024-11-20 11:33:53.553040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:21:45.850 [2024-11-20 11:33:53.553270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.850 pt2 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.850 11:33:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.850 "name": "raid_bdev1", 00:21:45.850 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:45.850 "strip_size_kb": 0, 00:21:45.850 "state": "online", 00:21:45.850 "raid_level": "raid1", 00:21:45.850 "superblock": true, 00:21:45.850 "num_base_bdevs": 2, 00:21:45.850 "num_base_bdevs_discovered": 1, 00:21:45.850 "num_base_bdevs_operational": 1, 00:21:45.850 "base_bdevs_list": [ 00:21:45.850 { 00:21:45.850 "name": null, 00:21:45.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.850 "is_configured": false, 00:21:45.850 "data_offset": 256, 00:21:45.850 "data_size": 7936 00:21:45.850 }, 00:21:45.850 { 00:21:45.850 "name": "pt2", 00:21:45.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.850 "is_configured": true, 00:21:45.850 "data_offset": 256, 00:21:45.850 "data_size": 7936 00:21:45.850 } 00:21:45.850 ] 00:21:45.850 }' 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.850 11:33:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 [2024-11-20 11:33:54.108768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.418 [2024-11-20 11:33:54.108808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.418 [2024-11-20 11:33:54.108902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.418 [2024-11-20 11:33:54.108974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:46.418 [2024-11-20 11:33:54.108991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 [2024-11-20 11:33:54.180850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.418 [2024-11-20 11:33:54.180942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.418 [2024-11-20 11:33:54.180975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:46.418 [2024-11-20 11:33:54.180990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.418 [2024-11-20 11:33:54.183966] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.418 [2024-11-20 11:33:54.184014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:46.418 [2024-11-20 11:33:54.184132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:46.418 [2024-11-20 11:33:54.184194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.418 [2024-11-20 11:33:54.184367] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:46.418 [2024-11-20 11:33:54.184385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.418 [2024-11-20 11:33:54.184407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:46.418 [2024-11-20 11:33:54.184488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:46.418 [2024-11-20 11:33:54.184600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:46.418 [2024-11-20 11:33:54.184632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:46.418 [2024-11-20 11:33:54.184979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:46.418 [2024-11-20 11:33:54.185171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:46.418 [2024-11-20 11:33:54.185191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:46.418 [2024-11-20 11:33:54.185423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.418 pt1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.418 "name": "raid_bdev1", 00:21:46.418 "uuid": "0da11421-2537-4151-a58c-1ec68c233741", 00:21:46.418 "strip_size_kb": 0, 00:21:46.418 "state": "online", 00:21:46.418 
"raid_level": "raid1", 00:21:46.418 "superblock": true, 00:21:46.418 "num_base_bdevs": 2, 00:21:46.418 "num_base_bdevs_discovered": 1, 00:21:46.418 "num_base_bdevs_operational": 1, 00:21:46.418 "base_bdevs_list": [ 00:21:46.418 { 00:21:46.418 "name": null, 00:21:46.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.418 "is_configured": false, 00:21:46.418 "data_offset": 256, 00:21:46.418 "data_size": 7936 00:21:46.418 }, 00:21:46.418 { 00:21:46.418 "name": "pt2", 00:21:46.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.418 "is_configured": true, 00:21:46.418 "data_offset": 256, 00:21:46.418 "data_size": 7936 00:21:46.418 } 00:21:46.418 ] 00:21:46.418 }' 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.418 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:21:46.986 [2024-11-20 11:33:54.761805] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 0da11421-2537-4151-a58c-1ec68c233741 '!=' 0da11421-2537-4151-a58c-1ec68c233741 ']' 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86523 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86523 ']' 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86523 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.986 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86523 00:21:47.244 killing process with pid 86523 00:21:47.244 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.244 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.244 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86523' 00:21:47.244 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86523 00:21:47.244 [2024-11-20 11:33:54.837369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.244 11:33:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86523 00:21:47.244 [2024-11-20 11:33:54.837510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.245 [2024-11-20 11:33:54.837586] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:47.245 [2024-11-20 11:33:54.837609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:47.245 [2024-11-20 11:33:55.029570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.619 11:33:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:48.619 00:21:48.619 real 0m6.709s 00:21:48.619 user 0m10.592s 00:21:48.619 sys 0m0.996s 00:21:48.619 11:33:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.619 11:33:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.619 ************************************ 00:21:48.619 END TEST raid_superblock_test_4k 00:21:48.619 ************************************ 00:21:48.620 11:33:56 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:48.620 11:33:56 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:48.620 11:33:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:48.620 11:33:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.620 11:33:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.620 ************************************ 00:21:48.620 START TEST raid_rebuild_test_sb_4k 00:21:48.620 ************************************ 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:48.620 
11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86856 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:48.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86856 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86856 ']' 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.620 11:33:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.620 [2024-11-20 11:33:56.261799] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:21:48.620 [2024-11-20 11:33:56.262291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86856 ] 00:21:48.620 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:21:48.620 Zero copy mechanism will not be used. 00:21:48.620 [2024-11-20 11:33:56.451285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.879 [2024-11-20 11:33:56.621559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.138 [2024-11-20 11:33:56.848524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.138 [2024-11-20 11:33:56.848599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 BaseBdev1_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 [2024-11-20 11:33:57.358009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:49.706 [2024-11-20 11:33:57.358089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.706 [2024-11-20 11:33:57.358155] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:21:49.706 [2024-11-20 11:33:57.358189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.706 [2024-11-20 11:33:57.361019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.706 [2024-11-20 11:33:57.361071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:49.706 BaseBdev1 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 BaseBdev2_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 [2024-11-20 11:33:57.410725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:49.706 [2024-11-20 11:33:57.410801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.706 [2024-11-20 11:33:57.410830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:49.706 [2024-11-20 11:33:57.410850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:49.706 [2024-11-20 11:33:57.413702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.706 [2024-11-20 11:33:57.413751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:49.706 BaseBdev2 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 spare_malloc 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 spare_delay 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.706 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.706 [2024-11-20 11:33:57.492961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:49.707 [2024-11-20 11:33:57.493070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.707 [2024-11-20 11:33:57.493117] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:49.707 [2024-11-20 11:33:57.493143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.707 [2024-11-20 11:33:57.496124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.707 [2024-11-20 11:33:57.496176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:49.707 spare 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.707 [2024-11-20 11:33:57.505146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.707 [2024-11-20 11:33:57.507767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.707 [2024-11-20 11:33:57.508050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:49.707 [2024-11-20 11:33:57.508076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:49.707 [2024-11-20 11:33:57.508405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:49.707 [2024-11-20 11:33:57.508647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:49.707 [2024-11-20 11:33:57.508680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:49.707 [2024-11-20 11:33:57.508948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.707 
11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.707 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.965 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.965 "name": "raid_bdev1", 00:21:49.965 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 
00:21:49.965 "strip_size_kb": 0, 00:21:49.965 "state": "online", 00:21:49.965 "raid_level": "raid1", 00:21:49.965 "superblock": true, 00:21:49.965 "num_base_bdevs": 2, 00:21:49.965 "num_base_bdevs_discovered": 2, 00:21:49.965 "num_base_bdevs_operational": 2, 00:21:49.965 "base_bdevs_list": [ 00:21:49.965 { 00:21:49.965 "name": "BaseBdev1", 00:21:49.965 "uuid": "ba0b197a-c3f1-5c3d-9bd4-a3f2609a21e4", 00:21:49.965 "is_configured": true, 00:21:49.965 "data_offset": 256, 00:21:49.965 "data_size": 7936 00:21:49.965 }, 00:21:49.965 { 00:21:49.965 "name": "BaseBdev2", 00:21:49.965 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:49.965 "is_configured": true, 00:21:49.965 "data_offset": 256, 00:21:49.965 "data_size": 7936 00:21:49.965 } 00:21:49.965 ] 00:21:49.965 }' 00:21:49.965 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.965 11:33:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.224 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:50.224 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.224 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.224 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:50.224 [2024-11-20 11:33:58.045547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.224 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:50.483 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:50.742 [2024-11-20 11:33:58.453370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:21:50.742 /dev/nbd0 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:50.742 1+0 records in 00:21:50.742 1+0 records out 00:21:50.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470269 s, 8.7 MB/s 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:50.742 11:33:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:50.742 11:33:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:51.675 7936+0 records in 00:21:51.675 7936+0 records out 00:21:51.675 32505856 bytes (33 MB, 31 MiB) copied, 0.986084 s, 33.0 MB/s 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:51.675 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:52.242 [2024-11-20 11:33:59.872305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.242 [2024-11-20 11:33:59.888442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.242 "name": "raid_bdev1", 00:21:52.242 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:52.242 "strip_size_kb": 0, 00:21:52.242 "state": "online", 00:21:52.242 "raid_level": "raid1", 00:21:52.242 "superblock": true, 00:21:52.242 "num_base_bdevs": 2, 00:21:52.242 "num_base_bdevs_discovered": 1, 00:21:52.242 "num_base_bdevs_operational": 1, 00:21:52.242 "base_bdevs_list": [ 00:21:52.242 { 00:21:52.242 "name": null, 00:21:52.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.242 "is_configured": false, 00:21:52.242 "data_offset": 0, 00:21:52.242 "data_size": 7936 00:21:52.242 }, 00:21:52.242 { 00:21:52.242 "name": "BaseBdev2", 00:21:52.242 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:52.242 "is_configured": true, 00:21:52.242 "data_offset": 256, 00:21:52.242 "data_size": 7936 00:21:52.242 } 00:21:52.242 ] 00:21:52.242 }' 00:21:52.242 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.243 11:33:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.810 11:34:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.810 11:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.810 11:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.810 [2024-11-20 11:34:00.408602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.810 [2024-11-20 11:34:00.425778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:52.810 11:34:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.810 11:34:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:52.810 [2024-11-20 11:34:00.428506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.748 11:34:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.748 "name": "raid_bdev1", 00:21:53.748 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:53.748 "strip_size_kb": 0, 00:21:53.748 "state": "online", 00:21:53.748 "raid_level": "raid1", 00:21:53.748 "superblock": true, 00:21:53.748 "num_base_bdevs": 2, 00:21:53.748 "num_base_bdevs_discovered": 2, 00:21:53.748 "num_base_bdevs_operational": 2, 00:21:53.748 "process": { 00:21:53.748 "type": "rebuild", 00:21:53.748 "target": "spare", 00:21:53.748 "progress": { 00:21:53.748 "blocks": 2560, 00:21:53.748 "percent": 32 00:21:53.748 } 00:21:53.748 }, 00:21:53.748 "base_bdevs_list": [ 00:21:53.748 { 00:21:53.748 "name": "spare", 00:21:53.748 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:53.748 "is_configured": true, 00:21:53.748 "data_offset": 256, 00:21:53.748 "data_size": 7936 00:21:53.748 }, 00:21:53.748 { 00:21:53.748 "name": "BaseBdev2", 00:21:53.748 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:53.748 "is_configured": true, 00:21:53.748 "data_offset": 256, 00:21:53.748 "data_size": 7936 00:21:53.748 } 00:21:53.748 ] 00:21:53.748 }' 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:53.748 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.748 11:34:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.008 [2024-11-20 11:34:01.594150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.008 [2024-11-20 11:34:01.638181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:54.008 [2024-11-20 11:34:01.638286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.008 [2024-11-20 11:34:01.638310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.008 [2024-11-20 11:34:01.638326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.008 11:34:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.008 "name": "raid_bdev1", 00:21:54.008 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:54.008 "strip_size_kb": 0, 00:21:54.008 "state": "online", 00:21:54.008 "raid_level": "raid1", 00:21:54.008 "superblock": true, 00:21:54.008 "num_base_bdevs": 2, 00:21:54.008 "num_base_bdevs_discovered": 1, 00:21:54.008 "num_base_bdevs_operational": 1, 00:21:54.008 "base_bdevs_list": [ 00:21:54.008 { 00:21:54.008 "name": null, 00:21:54.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.008 "is_configured": false, 00:21:54.008 "data_offset": 0, 00:21:54.008 "data_size": 7936 00:21:54.008 }, 00:21:54.008 { 00:21:54.008 "name": "BaseBdev2", 00:21:54.008 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:54.008 "is_configured": true, 00:21:54.008 "data_offset": 256, 00:21:54.008 "data_size": 7936 00:21:54.008 } 00:21:54.008 ] 00:21:54.008 }' 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.008 11:34:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.576 11:34:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.576 "name": "raid_bdev1", 00:21:54.576 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:54.576 "strip_size_kb": 0, 00:21:54.576 "state": "online", 00:21:54.576 "raid_level": "raid1", 00:21:54.576 "superblock": true, 00:21:54.576 "num_base_bdevs": 2, 00:21:54.576 "num_base_bdevs_discovered": 1, 00:21:54.576 "num_base_bdevs_operational": 1, 00:21:54.576 "base_bdevs_list": [ 00:21:54.576 { 00:21:54.576 "name": null, 00:21:54.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.576 "is_configured": false, 00:21:54.576 "data_offset": 0, 00:21:54.576 "data_size": 7936 00:21:54.576 }, 00:21:54.576 { 00:21:54.576 "name": "BaseBdev2", 00:21:54.576 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:54.576 "is_configured": true, 00:21:54.576 "data_offset": 256, 00:21:54.576 "data_size": 7936 00:21:54.576 } 00:21:54.576 ] 00:21:54.576 }' 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.576 11:34:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.576 [2024-11-20 11:34:02.363135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:54.576 [2024-11-20 11:34:02.379759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.576 11:34:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:54.576 [2024-11-20 11:34:02.382539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.953 "name": "raid_bdev1", 00:21:55.953 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:55.953 "strip_size_kb": 0, 00:21:55.953 "state": "online", 00:21:55.953 "raid_level": "raid1", 00:21:55.953 "superblock": true, 00:21:55.953 "num_base_bdevs": 2, 00:21:55.953 "num_base_bdevs_discovered": 2, 00:21:55.953 "num_base_bdevs_operational": 2, 00:21:55.953 "process": { 00:21:55.953 "type": "rebuild", 00:21:55.953 "target": "spare", 00:21:55.953 "progress": { 00:21:55.953 "blocks": 2560, 00:21:55.953 "percent": 32 00:21:55.953 } 00:21:55.953 }, 00:21:55.953 "base_bdevs_list": [ 00:21:55.953 { 00:21:55.953 "name": "spare", 00:21:55.953 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:55.953 "is_configured": true, 00:21:55.953 "data_offset": 256, 00:21:55.953 "data_size": 7936 00:21:55.953 }, 00:21:55.953 { 00:21:55.953 "name": "BaseBdev2", 00:21:55.953 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:55.953 "is_configured": true, 00:21:55.953 "data_offset": 256, 00:21:55.953 "data_size": 7936 00:21:55.953 } 00:21:55.953 ] 00:21:55.953 }' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:55.953 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=733 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.953 11:34:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.953 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.953 "name": "raid_bdev1", 00:21:55.953 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:55.953 "strip_size_kb": 0, 00:21:55.953 "state": "online", 00:21:55.953 "raid_level": "raid1", 00:21:55.953 "superblock": true, 00:21:55.953 "num_base_bdevs": 2, 00:21:55.954 "num_base_bdevs_discovered": 2, 00:21:55.954 "num_base_bdevs_operational": 2, 00:21:55.954 "process": { 00:21:55.954 "type": "rebuild", 00:21:55.954 "target": "spare", 00:21:55.954 "progress": { 00:21:55.954 "blocks": 2816, 00:21:55.954 "percent": 35 00:21:55.954 } 00:21:55.954 }, 00:21:55.954 "base_bdevs_list": [ 00:21:55.954 { 00:21:55.954 "name": "spare", 00:21:55.954 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:55.954 "is_configured": true, 00:21:55.954 "data_offset": 256, 00:21:55.954 "data_size": 7936 00:21:55.954 }, 00:21:55.954 { 00:21:55.954 "name": "BaseBdev2", 00:21:55.954 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:55.954 "is_configured": true, 00:21:55.954 "data_offset": 256, 00:21:55.954 "data_size": 7936 00:21:55.954 } 00:21:55.954 ] 00:21:55.954 }' 00:21:55.954 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.954 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.954 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.954 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.954 11:34:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.889 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.147 "name": "raid_bdev1", 00:21:57.147 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:57.147 "strip_size_kb": 0, 00:21:57.147 "state": "online", 00:21:57.147 "raid_level": "raid1", 00:21:57.147 "superblock": true, 00:21:57.147 "num_base_bdevs": 2, 00:21:57.147 "num_base_bdevs_discovered": 2, 00:21:57.147 "num_base_bdevs_operational": 2, 00:21:57.147 "process": { 00:21:57.147 "type": "rebuild", 00:21:57.147 "target": "spare", 00:21:57.147 "progress": { 00:21:57.147 "blocks": 5888, 00:21:57.147 "percent": 74 00:21:57.147 } 00:21:57.147 }, 00:21:57.147 "base_bdevs_list": [ 00:21:57.147 { 00:21:57.147 "name": "spare", 00:21:57.147 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:57.147 "is_configured": true, 00:21:57.147 "data_offset": 256, 00:21:57.147 "data_size": 7936 00:21:57.147 
}, 00:21:57.147 { 00:21:57.147 "name": "BaseBdev2", 00:21:57.147 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:57.147 "is_configured": true, 00:21:57.147 "data_offset": 256, 00:21:57.147 "data_size": 7936 00:21:57.147 } 00:21:57.147 ] 00:21:57.147 }' 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.147 11:34:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.714 [2024-11-20 11:34:05.506467] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:57.714 [2024-11-20 11:34:05.506590] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:57.714 [2024-11-20 11:34:05.506794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.282 "name": "raid_bdev1", 00:21:58.282 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:58.282 "strip_size_kb": 0, 00:21:58.282 "state": "online", 00:21:58.282 "raid_level": "raid1", 00:21:58.282 "superblock": true, 00:21:58.282 "num_base_bdevs": 2, 00:21:58.282 "num_base_bdevs_discovered": 2, 00:21:58.282 "num_base_bdevs_operational": 2, 00:21:58.282 "base_bdevs_list": [ 00:21:58.282 { 00:21:58.282 "name": "spare", 00:21:58.282 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:58.282 "is_configured": true, 00:21:58.282 "data_offset": 256, 00:21:58.282 "data_size": 7936 00:21:58.282 }, 00:21:58.282 { 00:21:58.282 "name": "BaseBdev2", 00:21:58.282 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:58.282 "is_configured": true, 00:21:58.282 "data_offset": 256, 00:21:58.282 "data_size": 7936 00:21:58.282 } 00:21:58.282 ] 00:21:58.282 }' 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:58.282 11:34:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.282 "name": "raid_bdev1", 00:21:58.282 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:58.282 "strip_size_kb": 0, 00:21:58.282 "state": "online", 00:21:58.282 "raid_level": "raid1", 00:21:58.282 "superblock": true, 00:21:58.282 "num_base_bdevs": 2, 00:21:58.282 "num_base_bdevs_discovered": 2, 00:21:58.282 "num_base_bdevs_operational": 2, 00:21:58.282 "base_bdevs_list": [ 00:21:58.282 { 00:21:58.282 "name": "spare", 00:21:58.282 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:58.282 "is_configured": true, 00:21:58.282 "data_offset": 256, 00:21:58.282 "data_size": 7936 00:21:58.282 }, 00:21:58.282 { 00:21:58.282 "name": "BaseBdev2", 00:21:58.282 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:58.282 "is_configured": true, 
00:21:58.282 "data_offset": 256, 00:21:58.282 "data_size": 7936 00:21:58.282 } 00:21:58.282 ] 00:21:58.282 }' 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.282 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.542 11:34:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.542 "name": "raid_bdev1", 00:21:58.542 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:21:58.542 "strip_size_kb": 0, 00:21:58.542 "state": "online", 00:21:58.542 "raid_level": "raid1", 00:21:58.542 "superblock": true, 00:21:58.542 "num_base_bdevs": 2, 00:21:58.542 "num_base_bdevs_discovered": 2, 00:21:58.542 "num_base_bdevs_operational": 2, 00:21:58.542 "base_bdevs_list": [ 00:21:58.542 { 00:21:58.542 "name": "spare", 00:21:58.542 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:21:58.542 "is_configured": true, 00:21:58.542 "data_offset": 256, 00:21:58.542 "data_size": 7936 00:21:58.542 }, 00:21:58.542 { 00:21:58.542 "name": "BaseBdev2", 00:21:58.542 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:21:58.542 "is_configured": true, 00:21:58.542 "data_offset": 256, 00:21:58.542 "data_size": 7936 00:21:58.542 } 00:21:58.542 ] 00:21:58.542 }' 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.542 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.110 [2024-11-20 11:34:06.707400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.110 [2024-11-20 11:34:06.707592] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:21:59.110 [2024-11-20 11:34:06.707725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.110 [2024-11-20 11:34:06.707819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.110 [2024-11-20 11:34:06.707845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:59.110 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.111 11:34:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:59.370 /dev/nbd0 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.370 1+0 records in 00:21:59.370 1+0 records out 00:21:59.370 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301669 s, 13.6 MB/s 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.370 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:59.636 /dev/nbd1 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:59.636 11:34:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:59.636 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.895 1+0 records in 00:21:59.895 1+0 records out 00:21:59.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372138 s, 11.0 MB/s 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:59.895 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.154 11:34:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:00.722 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:00.723 11:34:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.723 [2024-11-20 11:34:08.299767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:00.723 [2024-11-20 11:34:08.299837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.723 [2024-11-20 11:34:08.299872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:00.723 [2024-11-20 11:34:08.299888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.723 [2024-11-20 11:34:08.302862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.723 [2024-11-20 11:34:08.302908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:00.723 [2024-11-20 11:34:08.303032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:22:00.723 [2024-11-20 11:34:08.303103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:00.723 [2024-11-20 11:34:08.303299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.723 spare 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.723 [2024-11-20 11:34:08.403451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:00.723 [2024-11-20 11:34:08.403502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:00.723 [2024-11-20 11:34:08.404000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:00.723 [2024-11-20 11:34:08.404283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:00.723 [2024-11-20 11:34:08.404302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:00.723 [2024-11-20 11:34:08.404569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.723 
11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.723 "name": "raid_bdev1", 00:22:00.723 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:00.723 "strip_size_kb": 0, 00:22:00.723 "state": "online", 00:22:00.723 "raid_level": "raid1", 00:22:00.723 "superblock": true, 00:22:00.723 "num_base_bdevs": 2, 00:22:00.723 "num_base_bdevs_discovered": 2, 00:22:00.723 "num_base_bdevs_operational": 2, 00:22:00.723 "base_bdevs_list": [ 00:22:00.723 { 00:22:00.723 "name": "spare", 00:22:00.723 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:22:00.723 "is_configured": true, 00:22:00.723 "data_offset": 256, 00:22:00.723 
"data_size": 7936 00:22:00.723 }, 00:22:00.723 { 00:22:00.723 "name": "BaseBdev2", 00:22:00.723 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:00.723 "is_configured": true, 00:22:00.723 "data_offset": 256, 00:22:00.723 "data_size": 7936 00:22:00.723 } 00:22:00.723 ] 00:22:00.723 }' 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.723 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.291 "name": "raid_bdev1", 00:22:01.291 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:01.291 "strip_size_kb": 0, 00:22:01.291 "state": "online", 00:22:01.291 "raid_level": "raid1", 00:22:01.291 "superblock": true, 00:22:01.291 "num_base_bdevs": 2, 
00:22:01.291 "num_base_bdevs_discovered": 2, 00:22:01.291 "num_base_bdevs_operational": 2, 00:22:01.291 "base_bdevs_list": [ 00:22:01.291 { 00:22:01.291 "name": "spare", 00:22:01.291 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:22:01.291 "is_configured": true, 00:22:01.291 "data_offset": 256, 00:22:01.291 "data_size": 7936 00:22:01.291 }, 00:22:01.291 { 00:22:01.291 "name": "BaseBdev2", 00:22:01.291 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:01.291 "is_configured": true, 00:22:01.291 "data_offset": 256, 00:22:01.291 "data_size": 7936 00:22:01.291 } 00:22:01.291 ] 00:22:01.291 }' 00:22:01.291 11:34:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:01.291 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.550 11:34:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.550 [2024-11-20 11:34:09.152769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.550 
11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.550 "name": "raid_bdev1", 00:22:01.550 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:01.550 "strip_size_kb": 0, 00:22:01.550 "state": "online", 00:22:01.550 "raid_level": "raid1", 00:22:01.550 "superblock": true, 00:22:01.550 "num_base_bdevs": 2, 00:22:01.550 "num_base_bdevs_discovered": 1, 00:22:01.550 "num_base_bdevs_operational": 1, 00:22:01.550 "base_bdevs_list": [ 00:22:01.550 { 00:22:01.550 "name": null, 00:22:01.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.550 "is_configured": false, 00:22:01.550 "data_offset": 0, 00:22:01.550 "data_size": 7936 00:22:01.550 }, 00:22:01.550 { 00:22:01.550 "name": "BaseBdev2", 00:22:01.550 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:01.550 "is_configured": true, 00:22:01.550 "data_offset": 256, 00:22:01.550 "data_size": 7936 00:22:01.550 } 00:22:01.550 ] 00:22:01.550 }' 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.550 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.117 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:02.117 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.117 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.117 [2024-11-20 11:34:09.672992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.117 [2024-11-20 11:34:09.673516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:02.117 [2024-11-20 11:34:09.673552] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:02.117 [2024-11-20 11:34:09.673606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.117 [2024-11-20 11:34:09.690154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:02.117 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.117 11:34:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:02.117 [2024-11-20 11:34:09.692903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.053 "name": "raid_bdev1", 00:22:03.053 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:03.053 "strip_size_kb": 0, 00:22:03.053 "state": "online", 
00:22:03.053 "raid_level": "raid1", 00:22:03.053 "superblock": true, 00:22:03.053 "num_base_bdevs": 2, 00:22:03.053 "num_base_bdevs_discovered": 2, 00:22:03.053 "num_base_bdevs_operational": 2, 00:22:03.053 "process": { 00:22:03.053 "type": "rebuild", 00:22:03.053 "target": "spare", 00:22:03.053 "progress": { 00:22:03.053 "blocks": 2560, 00:22:03.053 "percent": 32 00:22:03.053 } 00:22:03.053 }, 00:22:03.053 "base_bdevs_list": [ 00:22:03.053 { 00:22:03.053 "name": "spare", 00:22:03.053 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:22:03.053 "is_configured": true, 00:22:03.053 "data_offset": 256, 00:22:03.053 "data_size": 7936 00:22:03.053 }, 00:22:03.053 { 00:22:03.053 "name": "BaseBdev2", 00:22:03.053 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:03.053 "is_configured": true, 00:22:03.053 "data_offset": 256, 00:22:03.053 "data_size": 7936 00:22:03.053 } 00:22:03.053 ] 00:22:03.053 }' 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.053 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.053 [2024-11-20 11:34:10.854588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.312 [2024-11-20 11:34:10.902512] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:03.312 [2024-11-20 
11:34:10.902649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.312 [2024-11-20 11:34:10.902680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.312 [2024-11-20 11:34:10.902696] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.312 "name": "raid_bdev1", 00:22:03.312 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:03.312 "strip_size_kb": 0, 00:22:03.312 "state": "online", 00:22:03.312 "raid_level": "raid1", 00:22:03.312 "superblock": true, 00:22:03.312 "num_base_bdevs": 2, 00:22:03.312 "num_base_bdevs_discovered": 1, 00:22:03.312 "num_base_bdevs_operational": 1, 00:22:03.312 "base_bdevs_list": [ 00:22:03.312 { 00:22:03.312 "name": null, 00:22:03.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.312 "is_configured": false, 00:22:03.312 "data_offset": 0, 00:22:03.312 "data_size": 7936 00:22:03.312 }, 00:22:03.312 { 00:22:03.312 "name": "BaseBdev2", 00:22:03.312 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:03.312 "is_configured": true, 00:22:03.312 "data_offset": 256, 00:22:03.312 "data_size": 7936 00:22:03.312 } 00:22:03.312 ] 00:22:03.312 }' 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.312 11:34:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.880 11:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:03.880 11:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.880 11:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.880 [2024-11-20 11:34:11.443563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:03.880 [2024-11-20 11:34:11.443670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.880 [2024-11-20 11:34:11.443705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:22:03.880 [2024-11-20 11:34:11.443723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.880 [2024-11-20 11:34:11.444329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.880 [2024-11-20 11:34:11.444377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:03.880 [2024-11-20 11:34:11.444537] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:03.880 [2024-11-20 11:34:11.444564] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:03.880 [2024-11-20 11:34:11.444578] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:03.880 [2024-11-20 11:34:11.444634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.880 [2024-11-20 11:34:11.460546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:03.880 spare 00:22:03.880 11:34:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.880 11:34:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:03.880 [2024-11-20 11:34:11.463189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.817 "name": "raid_bdev1", 00:22:04.817 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:04.817 "strip_size_kb": 0, 00:22:04.817 "state": "online", 00:22:04.817 "raid_level": "raid1", 00:22:04.817 "superblock": true, 00:22:04.817 "num_base_bdevs": 2, 00:22:04.817 "num_base_bdevs_discovered": 2, 00:22:04.817 "num_base_bdevs_operational": 2, 00:22:04.817 "process": { 00:22:04.817 "type": "rebuild", 00:22:04.817 "target": "spare", 00:22:04.817 "progress": { 00:22:04.817 "blocks": 2560, 00:22:04.817 "percent": 32 00:22:04.817 } 00:22:04.817 }, 00:22:04.817 "base_bdevs_list": [ 00:22:04.817 { 00:22:04.817 "name": "spare", 00:22:04.817 "uuid": "42abf1f0-7340-5e03-8680-6782f8914ef6", 00:22:04.817 "is_configured": true, 00:22:04.817 "data_offset": 256, 00:22:04.817 "data_size": 7936 00:22:04.817 }, 00:22:04.817 { 00:22:04.817 "name": "BaseBdev2", 00:22:04.817 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:04.817 "is_configured": true, 00:22:04.817 "data_offset": 256, 00:22:04.817 "data_size": 7936 00:22:04.817 } 00:22:04.817 ] 00:22:04.817 }' 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:04.817 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:04.818 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.818 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.818 [2024-11-20 11:34:12.636723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.079 [2024-11-20 11:34:12.672674] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:05.079 [2024-11-20 11:34:12.673020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.079 [2024-11-20 11:34:12.673171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.079 [2024-11-20 11:34:12.673223] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.079 "name": "raid_bdev1", 00:22:05.079 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:05.079 "strip_size_kb": 0, 00:22:05.079 "state": "online", 00:22:05.079 "raid_level": "raid1", 00:22:05.079 "superblock": true, 00:22:05.079 "num_base_bdevs": 2, 00:22:05.079 "num_base_bdevs_discovered": 1, 00:22:05.079 "num_base_bdevs_operational": 1, 00:22:05.079 "base_bdevs_list": [ 00:22:05.079 { 00:22:05.079 "name": null, 00:22:05.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.079 "is_configured": false, 00:22:05.079 "data_offset": 0, 00:22:05.079 "data_size": 7936 00:22:05.079 }, 00:22:05.079 { 00:22:05.079 "name": "BaseBdev2", 00:22:05.079 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:05.079 "is_configured": true, 00:22:05.079 "data_offset": 256, 00:22:05.079 "data_size": 7936 00:22:05.079 } 00:22:05.079 ] 00:22:05.079 }' 
00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.079 11:34:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.647 "name": "raid_bdev1", 00:22:05.647 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:05.647 "strip_size_kb": 0, 00:22:05.647 "state": "online", 00:22:05.647 "raid_level": "raid1", 00:22:05.647 "superblock": true, 00:22:05.647 "num_base_bdevs": 2, 00:22:05.647 "num_base_bdevs_discovered": 1, 00:22:05.647 "num_base_bdevs_operational": 1, 00:22:05.647 "base_bdevs_list": [ 00:22:05.647 { 00:22:05.647 "name": null, 00:22:05.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.647 "is_configured": false, 00:22:05.647 "data_offset": 0, 
00:22:05.647 "data_size": 7936 00:22:05.647 }, 00:22:05.647 { 00:22:05.647 "name": "BaseBdev2", 00:22:05.647 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:05.647 "is_configured": true, 00:22:05.647 "data_offset": 256, 00:22:05.647 "data_size": 7936 00:22:05.647 } 00:22:05.647 ] 00:22:05.647 }' 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:05.647 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.648 [2024-11-20 11:34:13.414509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:05.648 [2024-11-20 11:34:13.414575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:05.648 [2024-11-20 11:34:13.414609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:05.648 [2024-11-20 11:34:13.414649] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:05.648 [2024-11-20 11:34:13.415251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:05.648 [2024-11-20 11:34:13.415283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:05.648 [2024-11-20 11:34:13.415389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:05.648 [2024-11-20 11:34:13.415423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:05.648 [2024-11-20 11:34:13.415440] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:05.648 [2024-11-20 11:34:13.415457] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:05.648 BaseBdev1 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.648 11:34:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.585 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.844 "name": "raid_bdev1", 00:22:06.844 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:06.844 "strip_size_kb": 0, 00:22:06.844 "state": "online", 00:22:06.844 "raid_level": "raid1", 00:22:06.844 "superblock": true, 00:22:06.844 "num_base_bdevs": 2, 00:22:06.844 "num_base_bdevs_discovered": 1, 00:22:06.844 "num_base_bdevs_operational": 1, 00:22:06.844 "base_bdevs_list": [ 00:22:06.844 { 00:22:06.844 "name": null, 00:22:06.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.844 "is_configured": false, 00:22:06.844 "data_offset": 0, 00:22:06.844 "data_size": 7936 00:22:06.844 }, 00:22:06.844 { 00:22:06.844 "name": "BaseBdev2", 00:22:06.844 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:06.844 "is_configured": true, 00:22:06.844 "data_offset": 256, 00:22:06.844 "data_size": 7936 00:22:06.844 } 00:22:06.844 ] 00:22:06.844 }' 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.844 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.413 11:34:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.413 "name": "raid_bdev1", 00:22:07.413 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:07.413 "strip_size_kb": 0, 00:22:07.413 "state": "online", 00:22:07.413 "raid_level": "raid1", 00:22:07.413 "superblock": true, 00:22:07.413 "num_base_bdevs": 2, 00:22:07.413 "num_base_bdevs_discovered": 1, 00:22:07.413 "num_base_bdevs_operational": 1, 00:22:07.413 "base_bdevs_list": [ 00:22:07.413 { 00:22:07.413 "name": null, 00:22:07.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.413 "is_configured": false, 00:22:07.413 "data_offset": 0, 00:22:07.413 "data_size": 7936 00:22:07.413 }, 00:22:07.413 { 00:22:07.413 "name": "BaseBdev2", 00:22:07.413 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:07.413 "is_configured": true, 
00:22:07.413 "data_offset": 256, 00:22:07.413 "data_size": 7936 00:22:07.413 } 00:22:07.413 ] 00:22:07.413 }' 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:07.413 [2024-11-20 11:34:15.119136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.413 [2024-11-20 11:34:15.119340] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:07.413 [2024-11-20 11:34:15.119364] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:07.413 request: 00:22:07.413 { 00:22:07.413 "base_bdev": "BaseBdev1", 00:22:07.413 "raid_bdev": "raid_bdev1", 00:22:07.413 "method": "bdev_raid_add_base_bdev", 00:22:07.413 "req_id": 1 00:22:07.413 } 00:22:07.413 Got JSON-RPC error response 00:22:07.413 response: 00:22:07.413 { 00:22:07.413 "code": -22, 00:22:07.413 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:07.413 } 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.413 11:34:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.348 "name": "raid_bdev1", 00:22:08.348 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:08.348 "strip_size_kb": 0, 00:22:08.348 "state": "online", 00:22:08.348 "raid_level": "raid1", 00:22:08.348 "superblock": true, 00:22:08.348 "num_base_bdevs": 2, 00:22:08.348 "num_base_bdevs_discovered": 1, 00:22:08.348 "num_base_bdevs_operational": 1, 00:22:08.348 "base_bdevs_list": [ 00:22:08.348 { 00:22:08.348 "name": null, 00:22:08.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.348 "is_configured": false, 00:22:08.348 "data_offset": 0, 00:22:08.348 "data_size": 7936 00:22:08.348 }, 00:22:08.348 { 00:22:08.348 "name": "BaseBdev2", 00:22:08.348 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:08.348 "is_configured": true, 00:22:08.348 "data_offset": 256, 00:22:08.348 "data_size": 7936 00:22:08.348 } 00:22:08.348 ] 00:22:08.348 }' 
00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.348 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:08.916 "name": "raid_bdev1", 00:22:08.916 "uuid": "2fec5a76-6898-4fb5-b0d2-7a34d1c408ee", 00:22:08.916 "strip_size_kb": 0, 00:22:08.916 "state": "online", 00:22:08.916 "raid_level": "raid1", 00:22:08.916 "superblock": true, 00:22:08.916 "num_base_bdevs": 2, 00:22:08.916 "num_base_bdevs_discovered": 1, 00:22:08.916 "num_base_bdevs_operational": 1, 00:22:08.916 "base_bdevs_list": [ 00:22:08.916 { 00:22:08.916 "name": null, 00:22:08.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.916 "is_configured": false, 00:22:08.916 "data_offset": 0, 
00:22:08.916 "data_size": 7936 00:22:08.916 }, 00:22:08.916 { 00:22:08.916 "name": "BaseBdev2", 00:22:08.916 "uuid": "a2720339-179c-542a-a8d9-b6a7c9f057c5", 00:22:08.916 "is_configured": true, 00:22:08.916 "data_offset": 256, 00:22:08.916 "data_size": 7936 00:22:08.916 } 00:22:08.916 ] 00:22:08.916 }' 00:22:08.916 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86856 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86856 ']' 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86856 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86856 00:22:09.174 killing process with pid 86856 00:22:09.174 Received shutdown signal, test time was about 60.000000 seconds 00:22:09.174 00:22:09.174 Latency(us) 00:22:09.174 [2024-11-20T11:34:17.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.174 [2024-11-20T11:34:17.020Z] =================================================================================================================== 00:22:09.174 [2024-11-20T11:34:17.020Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.174 11:34:16 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.174 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.175 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86856' 00:22:09.175 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86856 00:22:09.175 [2024-11-20 11:34:16.872180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.175 11:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86856 00:22:09.175 [2024-11-20 11:34:16.872336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.175 [2024-11-20 11:34:16.872403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.175 [2024-11-20 11:34:16.872432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:09.433 [2024-11-20 11:34:17.147436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.369 11:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:10.369 00:22:10.369 real 0m22.051s 00:22:10.369 user 0m29.963s 00:22:10.369 sys 0m2.670s 00:22:10.369 11:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.369 11:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:10.369 ************************************ 00:22:10.369 END TEST raid_rebuild_test_sb_4k 00:22:10.369 ************************************ 00:22:10.628 11:34:18 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:10.628 11:34:18 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:10.628 11:34:18 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:10.628 11:34:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.628 11:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.628 ************************************ 00:22:10.628 START TEST raid_state_function_test_sb_md_separate 00:22:10.628 ************************************ 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:22:10.628 Process raid pid: 87566 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87566 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87566' 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87566 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87566 ']' 00:22:10.628 11:34:18 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.628 11:34:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.628 [2024-11-20 11:34:18.366867] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:22:10.628 [2024-11-20 11:34:18.367295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.888 [2024-11-20 11:34:18.561089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.888 [2024-11-20 11:34:18.717902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.147 [2024-11-20 11:34:18.939602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.147 [2024-11-20 11:34:18.939876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.714 [2024-11-20 11:34:19.333250] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.714 [2024-11-20 11:34:19.333346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.714 [2024-11-20 11:34:19.333364] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:11.714 [2024-11-20 11:34:19.333381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.714 "name": "Existed_Raid", 00:22:11.714 "uuid": "578a5d5a-1e76-4299-a795-c3cb2c2290d3", 00:22:11.714 "strip_size_kb": 0, 00:22:11.714 "state": "configuring", 00:22:11.714 "raid_level": "raid1", 00:22:11.714 "superblock": true, 00:22:11.714 "num_base_bdevs": 2, 00:22:11.714 "num_base_bdevs_discovered": 0, 00:22:11.714 "num_base_bdevs_operational": 2, 00:22:11.714 "base_bdevs_list": [ 00:22:11.714 { 00:22:11.714 "name": "BaseBdev1", 00:22:11.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.714 "is_configured": false, 00:22:11.714 "data_offset": 0, 00:22:11.714 "data_size": 0 00:22:11.714 }, 00:22:11.714 { 00:22:11.714 "name": "BaseBdev2", 00:22:11.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.714 "is_configured": false, 00:22:11.714 "data_offset": 0, 00:22:11.714 "data_size": 0 00:22:11.714 } 00:22:11.714 ] 00:22:11.714 }' 00:22:11.714 11:34:19 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.714 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.046 [2024-11-20 11:34:19.857306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.046 [2024-11-20 11:34:19.857488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.046 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.046 [2024-11-20 11:34:19.869301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:12.046 [2024-11-20 11:34:19.869495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:12.046 [2024-11-20 11:34:19.869655] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.046 [2024-11-20 11:34:19.869798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.320 11:34:19 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.320 [2024-11-20 11:34:19.916153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.320 BaseBdev1 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.320 [ 00:22:12.320 { 00:22:12.320 "name": "BaseBdev1", 00:22:12.320 "aliases": [ 00:22:12.320 "4fc93b03-e3fb-402c-aa84-cee30d0f6341" 00:22:12.320 ], 00:22:12.320 "product_name": "Malloc disk", 00:22:12.320 "block_size": 4096, 00:22:12.320 "num_blocks": 8192, 00:22:12.320 "uuid": "4fc93b03-e3fb-402c-aa84-cee30d0f6341", 00:22:12.320 "md_size": 32, 00:22:12.320 "md_interleave": false, 00:22:12.320 "dif_type": 0, 00:22:12.320 "assigned_rate_limits": { 00:22:12.320 "rw_ios_per_sec": 0, 00:22:12.320 "rw_mbytes_per_sec": 0, 00:22:12.320 "r_mbytes_per_sec": 0, 00:22:12.320 "w_mbytes_per_sec": 0 00:22:12.320 }, 00:22:12.320 "claimed": true, 00:22:12.320 "claim_type": "exclusive_write", 00:22:12.320 "zoned": false, 00:22:12.320 "supported_io_types": { 00:22:12.320 "read": true, 00:22:12.320 "write": true, 00:22:12.320 "unmap": true, 00:22:12.320 "flush": true, 00:22:12.320 "reset": true, 00:22:12.320 "nvme_admin": false, 00:22:12.320 "nvme_io": false, 00:22:12.320 "nvme_io_md": false, 00:22:12.320 "write_zeroes": true, 00:22:12.320 "zcopy": true, 00:22:12.320 "get_zone_info": false, 00:22:12.320 "zone_management": false, 00:22:12.320 "zone_append": false, 00:22:12.320 "compare": false, 00:22:12.320 "compare_and_write": false, 00:22:12.320 "abort": true, 00:22:12.320 "seek_hole": false, 00:22:12.320 "seek_data": false, 00:22:12.320 "copy": true, 00:22:12.320 "nvme_iov_md": false 00:22:12.320 }, 00:22:12.320 "memory_domains": [ 00:22:12.320 { 00:22:12.320 "dma_device_id": "system", 00:22:12.320 "dma_device_type": 1 00:22:12.320 }, 
00:22:12.320 { 00:22:12.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.320 "dma_device_type": 2 00:22:12.320 } 00:22:12.320 ], 00:22:12.320 "driver_specific": {} 00:22:12.320 } 00:22:12.320 ] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:12.320 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.321 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.321 11:34:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.321 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.321 "name": "Existed_Raid", 00:22:12.321 "uuid": "d377a1b5-3d0f-45de-af54-060d17dd6ee7", 00:22:12.321 "strip_size_kb": 0, 00:22:12.321 "state": "configuring", 00:22:12.321 "raid_level": "raid1", 00:22:12.321 "superblock": true, 00:22:12.321 "num_base_bdevs": 2, 00:22:12.321 "num_base_bdevs_discovered": 1, 00:22:12.321 "num_base_bdevs_operational": 2, 00:22:12.321 "base_bdevs_list": [ 00:22:12.321 { 00:22:12.321 "name": "BaseBdev1", 00:22:12.321 "uuid": "4fc93b03-e3fb-402c-aa84-cee30d0f6341", 00:22:12.321 "is_configured": true, 00:22:12.321 "data_offset": 256, 00:22:12.321 "data_size": 7936 00:22:12.321 }, 00:22:12.321 { 00:22:12.321 "name": "BaseBdev2", 00:22:12.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.321 "is_configured": false, 00:22:12.321 "data_offset": 0, 00:22:12.321 "data_size": 0 00:22:12.321 } 00:22:12.321 ] 00:22:12.321 }' 00:22:12.321 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.321 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:22:12.890 [2024-11-20 11:34:20.448411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.890 [2024-11-20 11:34:20.448478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.890 [2024-11-20 11:34:20.456449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.890 [2024-11-20 11:34:20.458962] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.890 [2024-11-20 11:34:20.459156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.890 "name": "Existed_Raid", 00:22:12.890 "uuid": "e657d363-b5ac-4ed9-9096-204335838728", 00:22:12.890 "strip_size_kb": 0, 00:22:12.890 "state": "configuring", 00:22:12.890 "raid_level": "raid1", 00:22:12.890 "superblock": true, 00:22:12.890 "num_base_bdevs": 2, 00:22:12.890 "num_base_bdevs_discovered": 1, 00:22:12.890 
"num_base_bdevs_operational": 2, 00:22:12.890 "base_bdevs_list": [ 00:22:12.890 { 00:22:12.890 "name": "BaseBdev1", 00:22:12.890 "uuid": "4fc93b03-e3fb-402c-aa84-cee30d0f6341", 00:22:12.890 "is_configured": true, 00:22:12.890 "data_offset": 256, 00:22:12.890 "data_size": 7936 00:22:12.890 }, 00:22:12.890 { 00:22:12.890 "name": "BaseBdev2", 00:22:12.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.890 "is_configured": false, 00:22:12.890 "data_offset": 0, 00:22:12.890 "data_size": 0 00:22:12.890 } 00:22:12.890 ] 00:22:12.890 }' 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.890 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.152 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:13.152 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.152 11:34:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.411 [2024-11-20 11:34:21.012194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.411 [2024-11-20 11:34:21.012482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:13.411 [2024-11-20 11:34:21.012502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:13.411 [2024-11-20 11:34:21.012608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:13.411 [2024-11-20 11:34:21.012794] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:13.411 [2024-11-20 11:34:21.012823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:13.411 BaseBdev2 
00:22:13.411 [2024-11-20 11:34:21.013005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.411 [ 00:22:13.411 { 00:22:13.411 "name": "BaseBdev2", 00:22:13.411 "aliases": [ 00:22:13.411 
"ea01d6c0-35fe-4654-837a-9dd506f104fb" 00:22:13.411 ], 00:22:13.411 "product_name": "Malloc disk", 00:22:13.411 "block_size": 4096, 00:22:13.411 "num_blocks": 8192, 00:22:13.411 "uuid": "ea01d6c0-35fe-4654-837a-9dd506f104fb", 00:22:13.411 "md_size": 32, 00:22:13.411 "md_interleave": false, 00:22:13.411 "dif_type": 0, 00:22:13.411 "assigned_rate_limits": { 00:22:13.411 "rw_ios_per_sec": 0, 00:22:13.411 "rw_mbytes_per_sec": 0, 00:22:13.411 "r_mbytes_per_sec": 0, 00:22:13.411 "w_mbytes_per_sec": 0 00:22:13.411 }, 00:22:13.411 "claimed": true, 00:22:13.411 "claim_type": "exclusive_write", 00:22:13.411 "zoned": false, 00:22:13.411 "supported_io_types": { 00:22:13.411 "read": true, 00:22:13.411 "write": true, 00:22:13.411 "unmap": true, 00:22:13.411 "flush": true, 00:22:13.411 "reset": true, 00:22:13.411 "nvme_admin": false, 00:22:13.411 "nvme_io": false, 00:22:13.411 "nvme_io_md": false, 00:22:13.411 "write_zeroes": true, 00:22:13.411 "zcopy": true, 00:22:13.411 "get_zone_info": false, 00:22:13.411 "zone_management": false, 00:22:13.411 "zone_append": false, 00:22:13.411 "compare": false, 00:22:13.411 "compare_and_write": false, 00:22:13.411 "abort": true, 00:22:13.411 "seek_hole": false, 00:22:13.411 "seek_data": false, 00:22:13.411 "copy": true, 00:22:13.411 "nvme_iov_md": false 00:22:13.411 }, 00:22:13.411 "memory_domains": [ 00:22:13.411 { 00:22:13.411 "dma_device_id": "system", 00:22:13.411 "dma_device_type": 1 00:22:13.411 }, 00:22:13.411 { 00:22:13.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.411 "dma_device_type": 2 00:22:13.411 } 00:22:13.411 ], 00:22:13.411 "driver_specific": {} 00:22:13.411 } 00:22:13.411 ] 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.411 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.412 11:34:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.412 "name": "Existed_Raid", 00:22:13.412 "uuid": "e657d363-b5ac-4ed9-9096-204335838728", 00:22:13.412 "strip_size_kb": 0, 00:22:13.412 "state": "online", 00:22:13.412 "raid_level": "raid1", 00:22:13.412 "superblock": true, 00:22:13.412 "num_base_bdevs": 2, 00:22:13.412 "num_base_bdevs_discovered": 2, 00:22:13.412 "num_base_bdevs_operational": 2, 00:22:13.412 "base_bdevs_list": [ 00:22:13.412 { 00:22:13.412 "name": "BaseBdev1", 00:22:13.412 "uuid": "4fc93b03-e3fb-402c-aa84-cee30d0f6341", 00:22:13.412 "is_configured": true, 00:22:13.412 "data_offset": 256, 00:22:13.412 "data_size": 7936 00:22:13.412 }, 00:22:13.412 { 00:22:13.412 "name": "BaseBdev2", 00:22:13.412 "uuid": "ea01d6c0-35fe-4654-837a-9dd506f104fb", 00:22:13.412 "is_configured": true, 00:22:13.412 "data_offset": 256, 00:22:13.412 "data_size": 7936 00:22:13.412 } 00:22:13.412 ] 00:22:13.412 }' 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.412 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:13.978 11:34:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.978 [2024-11-20 11:34:21.568875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.978 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.978 "name": "Existed_Raid", 00:22:13.978 "aliases": [ 00:22:13.978 "e657d363-b5ac-4ed9-9096-204335838728" 00:22:13.978 ], 00:22:13.978 "product_name": "Raid Volume", 00:22:13.978 "block_size": 4096, 00:22:13.978 "num_blocks": 7936, 00:22:13.978 "uuid": "e657d363-b5ac-4ed9-9096-204335838728", 00:22:13.978 "md_size": 32, 00:22:13.978 "md_interleave": false, 00:22:13.978 "dif_type": 0, 00:22:13.978 "assigned_rate_limits": { 00:22:13.978 "rw_ios_per_sec": 0, 00:22:13.978 "rw_mbytes_per_sec": 0, 00:22:13.978 "r_mbytes_per_sec": 0, 00:22:13.978 "w_mbytes_per_sec": 0 00:22:13.978 }, 00:22:13.978 "claimed": false, 00:22:13.978 "zoned": false, 00:22:13.978 "supported_io_types": { 00:22:13.978 "read": true, 00:22:13.978 "write": true, 00:22:13.978 "unmap": false, 00:22:13.978 "flush": false, 00:22:13.978 "reset": true, 00:22:13.978 "nvme_admin": false, 00:22:13.978 "nvme_io": false, 00:22:13.978 "nvme_io_md": false, 00:22:13.978 "write_zeroes": true, 00:22:13.978 "zcopy": false, 00:22:13.978 "get_zone_info": 
false, 00:22:13.978 "zone_management": false, 00:22:13.978 "zone_append": false, 00:22:13.978 "compare": false, 00:22:13.978 "compare_and_write": false, 00:22:13.978 "abort": false, 00:22:13.978 "seek_hole": false, 00:22:13.978 "seek_data": false, 00:22:13.978 "copy": false, 00:22:13.978 "nvme_iov_md": false 00:22:13.978 }, 00:22:13.978 "memory_domains": [ 00:22:13.978 { 00:22:13.978 "dma_device_id": "system", 00:22:13.978 "dma_device_type": 1 00:22:13.978 }, 00:22:13.978 { 00:22:13.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.979 "dma_device_type": 2 00:22:13.979 }, 00:22:13.979 { 00:22:13.979 "dma_device_id": "system", 00:22:13.979 "dma_device_type": 1 00:22:13.979 }, 00:22:13.979 { 00:22:13.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.979 "dma_device_type": 2 00:22:13.979 } 00:22:13.979 ], 00:22:13.979 "driver_specific": { 00:22:13.979 "raid": { 00:22:13.979 "uuid": "e657d363-b5ac-4ed9-9096-204335838728", 00:22:13.979 "strip_size_kb": 0, 00:22:13.979 "state": "online", 00:22:13.979 "raid_level": "raid1", 00:22:13.979 "superblock": true, 00:22:13.979 "num_base_bdevs": 2, 00:22:13.979 "num_base_bdevs_discovered": 2, 00:22:13.979 "num_base_bdevs_operational": 2, 00:22:13.979 "base_bdevs_list": [ 00:22:13.979 { 00:22:13.979 "name": "BaseBdev1", 00:22:13.979 "uuid": "4fc93b03-e3fb-402c-aa84-cee30d0f6341", 00:22:13.979 "is_configured": true, 00:22:13.979 "data_offset": 256, 00:22:13.979 "data_size": 7936 00:22:13.979 }, 00:22:13.979 { 00:22:13.979 "name": "BaseBdev2", 00:22:13.979 "uuid": "ea01d6c0-35fe-4654-837a-9dd506f104fb", 00:22:13.979 "is_configured": true, 00:22:13.979 "data_offset": 256, 00:22:13.979 "data_size": 7936 00:22:13.979 } 00:22:13.979 ] 00:22:13.979 } 00:22:13.979 } 00:22:13.979 }' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:13.979 11:34:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:13.979 BaseBdev2' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.979 11:34:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.979 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 [2024-11-20 11:34:21.824572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.238 "name": "Existed_Raid", 
00:22:14.238 "uuid": "e657d363-b5ac-4ed9-9096-204335838728", 00:22:14.238 "strip_size_kb": 0, 00:22:14.238 "state": "online", 00:22:14.238 "raid_level": "raid1", 00:22:14.238 "superblock": true, 00:22:14.238 "num_base_bdevs": 2, 00:22:14.238 "num_base_bdevs_discovered": 1, 00:22:14.238 "num_base_bdevs_operational": 1, 00:22:14.238 "base_bdevs_list": [ 00:22:14.238 { 00:22:14.238 "name": null, 00:22:14.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.238 "is_configured": false, 00:22:14.238 "data_offset": 0, 00:22:14.238 "data_size": 7936 00:22:14.238 }, 00:22:14.238 { 00:22:14.238 "name": "BaseBdev2", 00:22:14.238 "uuid": "ea01d6c0-35fe-4654-837a-9dd506f104fb", 00:22:14.238 "is_configured": true, 00:22:14.238 "data_offset": 256, 00:22:14.238 "data_size": 7936 00:22:14.238 } 00:22:14.238 ] 00:22:14.238 }' 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.238 11:34:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.805 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:14.805 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:14.805 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.805 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.806 [2024-11-20 11:34:22.525696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:14.806 [2024-11-20 11:34:22.525896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.806 [2024-11-20 11:34:22.627457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:14.806 [2024-11-20 11:34:22.629717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.806 [2024-11-20 11:34:22.629819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.806 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87566 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87566 ']' 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87566 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87566 00:22:15.065 killing process with pid 87566 00:22:15.065 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.066 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.066 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87566' 00:22:15.066 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87566 00:22:15.066 11:34:22 bdev_raid.raid_state_function_test_sb_md_separate 
-- common/autotest_common.sh@978 -- # wait 87566 00:22:15.066 [2024-11-20 11:34:22.718382] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.066 [2024-11-20 11:34:22.737099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.001 11:34:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:16.001 00:22:16.001 real 0m5.558s 00:22:16.001 user 0m8.364s 00:22:16.001 sys 0m0.788s 00:22:16.001 11:34:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.001 ************************************ 00:22:16.001 END TEST raid_state_function_test_sb_md_separate 00:22:16.001 ************************************ 00:22:16.001 11:34:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.260 11:34:23 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:16.260 11:34:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:16.260 11:34:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.260 11:34:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.260 ************************************ 00:22:16.260 START TEST raid_superblock_test_md_separate 00:22:16.260 ************************************ 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local 
base_bdevs_malloc 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87813 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87813 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87813 ']' 00:22:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.260 11:34:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.260 [2024-11-20 11:34:23.978693] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:22:16.260 [2024-11-20 11:34:23.978874] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87813 ] 00:22:16.519 [2024-11-20 11:34:24.165557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.519 [2024-11-20 11:34:24.342993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.777 [2024-11-20 11:34:24.596979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.777 [2024-11-20 11:34:24.597052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.343 11:34:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.343 malloc1 00:22:17.343 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.343 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.343 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.343 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.343 [2024-11-20 11:34:25.017381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.344 [2024-11-20 11:34:25.018043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.344 [2024-11-20 11:34:25.018138] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:17.344 [2024-11-20 11:34:25.018279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.344 [2024-11-20 11:34:25.021584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.344 [2024-11-20 11:34:25.021767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.344 pt1 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.344 malloc2 00:22:17.344 11:34:25 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.344 [2024-11-20 11:34:25.097207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.344 [2024-11-20 11:34:25.097342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.344 [2024-11-20 11:34:25.097409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:17.344 [2024-11-20 11:34:25.097439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.344 [2024-11-20 11:34:25.101179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.344 [2024-11-20 11:34:25.101242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.344 pt2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.344 
[2024-11-20 11:34:25.109571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.344 [2024-11-20 11:34:25.112577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.344 [2024-11-20 11:34:25.112849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:17.344 [2024-11-20 11:34:25.112872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:17.344 [2024-11-20 11:34:25.112994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:17.344 [2024-11-20 11:34:25.113186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:17.344 [2024-11-20 11:34:25.113206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:17.344 [2024-11-20 11:34:25.113380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.344 "name": "raid_bdev1", 00:22:17.344 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:17.344 "strip_size_kb": 0, 00:22:17.344 "state": "online", 00:22:17.344 "raid_level": "raid1", 00:22:17.344 "superblock": true, 00:22:17.344 "num_base_bdevs": 2, 00:22:17.344 "num_base_bdevs_discovered": 2, 00:22:17.344 "num_base_bdevs_operational": 2, 00:22:17.344 "base_bdevs_list": [ 00:22:17.344 { 00:22:17.344 "name": "pt1", 00:22:17.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.344 "is_configured": true, 00:22:17.344 "data_offset": 256, 00:22:17.344 "data_size": 7936 00:22:17.344 }, 00:22:17.344 { 00:22:17.344 "name": "pt2", 00:22:17.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.344 "is_configured": true, 00:22:17.344 "data_offset": 256, 00:22:17.344 "data_size": 7936 00:22:17.344 } 00:22:17.344 ] 00:22:17.344 }' 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.344 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:17.912 [2024-11-20 11:34:25.666192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:17.912 "name": "raid_bdev1", 00:22:17.912 "aliases": [ 00:22:17.912 "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9" 00:22:17.912 ], 00:22:17.912 "product_name": "Raid Volume", 00:22:17.912 "block_size": 4096, 00:22:17.912 "num_blocks": 7936, 00:22:17.912 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 
00:22:17.912 "md_size": 32, 00:22:17.912 "md_interleave": false, 00:22:17.912 "dif_type": 0, 00:22:17.912 "assigned_rate_limits": { 00:22:17.912 "rw_ios_per_sec": 0, 00:22:17.912 "rw_mbytes_per_sec": 0, 00:22:17.912 "r_mbytes_per_sec": 0, 00:22:17.912 "w_mbytes_per_sec": 0 00:22:17.912 }, 00:22:17.912 "claimed": false, 00:22:17.912 "zoned": false, 00:22:17.912 "supported_io_types": { 00:22:17.912 "read": true, 00:22:17.912 "write": true, 00:22:17.912 "unmap": false, 00:22:17.912 "flush": false, 00:22:17.912 "reset": true, 00:22:17.912 "nvme_admin": false, 00:22:17.912 "nvme_io": false, 00:22:17.912 "nvme_io_md": false, 00:22:17.912 "write_zeroes": true, 00:22:17.912 "zcopy": false, 00:22:17.912 "get_zone_info": false, 00:22:17.912 "zone_management": false, 00:22:17.912 "zone_append": false, 00:22:17.912 "compare": false, 00:22:17.912 "compare_and_write": false, 00:22:17.912 "abort": false, 00:22:17.912 "seek_hole": false, 00:22:17.912 "seek_data": false, 00:22:17.912 "copy": false, 00:22:17.912 "nvme_iov_md": false 00:22:17.912 }, 00:22:17.912 "memory_domains": [ 00:22:17.912 { 00:22:17.912 "dma_device_id": "system", 00:22:17.912 "dma_device_type": 1 00:22:17.912 }, 00:22:17.912 { 00:22:17.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.912 "dma_device_type": 2 00:22:17.912 }, 00:22:17.912 { 00:22:17.912 "dma_device_id": "system", 00:22:17.912 "dma_device_type": 1 00:22:17.912 }, 00:22:17.912 { 00:22:17.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.912 "dma_device_type": 2 00:22:17.912 } 00:22:17.912 ], 00:22:17.912 "driver_specific": { 00:22:17.912 "raid": { 00:22:17.912 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:17.912 "strip_size_kb": 0, 00:22:17.912 "state": "online", 00:22:17.912 "raid_level": "raid1", 00:22:17.912 "superblock": true, 00:22:17.912 "num_base_bdevs": 2, 00:22:17.912 "num_base_bdevs_discovered": 2, 00:22:17.912 "num_base_bdevs_operational": 2, 00:22:17.912 "base_bdevs_list": [ 00:22:17.912 { 00:22:17.912 "name": "pt1", 
00:22:17.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:17.912 "is_configured": true, 00:22:17.912 "data_offset": 256, 00:22:17.912 "data_size": 7936 00:22:17.912 }, 00:22:17.912 { 00:22:17.912 "name": "pt2", 00:22:17.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.912 "is_configured": true, 00:22:17.912 "data_offset": 256, 00:22:17.912 "data_size": 7936 00:22:17.912 } 00:22:17.912 ] 00:22:17.912 } 00:22:17.912 } 00:22:17.912 }' 00:22:17.912 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:18.171 pt2' 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:18.171 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:18.172 11:34:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 [2024-11-20 11:34:25.946101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 ']' 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 [2024-11-20 11:34:25.993714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.172 [2024-11-20 11:34:25.993784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.172 [2024-11-20 11:34:25.993906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.172 [2024-11-20 11:34:25.994018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.172 [2024-11-20 11:34:25.994038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.172 11:34:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.172 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.172 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.172 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:18.172 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 [2024-11-20 11:34:26.137791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:18.431 [2024-11-20 11:34:26.141005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:18.431 [2024-11-20 11:34:26.141329] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:18.431 [2024-11-20 11:34:26.141567] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:18.431 [2024-11-20 11:34:26.141878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.431 [2024-11-20 11:34:26.142062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:18.431 request: 00:22:18.431 { 00:22:18.431 "name": "raid_bdev1", 00:22:18.431 "raid_level": "raid1", 00:22:18.431 "base_bdevs": [ 00:22:18.431 "malloc1", 00:22:18.431 "malloc2" 00:22:18.431 ], 00:22:18.431 "superblock": false, 00:22:18.431 "method": "bdev_raid_create", 00:22:18.431 "req_id": 1 00:22:18.431 } 00:22:18.431 Got JSON-RPC error response 00:22:18.431 response: 00:22:18.431 { 00:22:18.431 "code": -17, 00:22:18.431 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:18.431 } 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 11:34:26 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 [2024-11-20 11:34:26.206415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:18.431 [2024-11-20 11:34:26.206670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.431 [2024-11-20 11:34:26.206749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:18.431 [2024-11-20 11:34:26.206879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.431 [2024-11-20 11:34:26.209910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.431 [2024-11-20 11:34:26.210077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:18.431 [2024-11-20 11:34:26.210378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:18.431 [2024-11-20 11:34:26.210469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:18.431 pt1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:18.431 11:34:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.431 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.432 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.432 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.432 "name": "raid_bdev1", 00:22:18.432 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:18.432 "strip_size_kb": 0, 00:22:18.432 "state": "configuring", 00:22:18.432 "raid_level": "raid1", 00:22:18.432 
"superblock": true, 00:22:18.432 "num_base_bdevs": 2, 00:22:18.432 "num_base_bdevs_discovered": 1, 00:22:18.432 "num_base_bdevs_operational": 2, 00:22:18.432 "base_bdevs_list": [ 00:22:18.432 { 00:22:18.432 "name": "pt1", 00:22:18.432 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:18.432 "is_configured": true, 00:22:18.432 "data_offset": 256, 00:22:18.432 "data_size": 7936 00:22:18.432 }, 00:22:18.432 { 00:22:18.432 "name": null, 00:22:18.432 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:18.432 "is_configured": false, 00:22:18.432 "data_offset": 256, 00:22:18.432 "data_size": 7936 00:22:18.432 } 00:22:18.432 ] 00:22:18.432 }' 00:22:18.432 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.432 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.999 [2024-11-20 11:34:26.734719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:18.999 [2024-11-20 11:34:26.734869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.999 [2024-11-20 11:34:26.734907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:18.999 
[2024-11-20 11:34:26.734927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.999 [2024-11-20 11:34:26.735271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.999 [2024-11-20 11:34:26.735310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:18.999 [2024-11-20 11:34:26.735388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:18.999 [2024-11-20 11:34:26.735427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:18.999 [2024-11-20 11:34:26.735584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:18.999 [2024-11-20 11:34:26.735626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:18.999 [2024-11-20 11:34:26.735722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:18.999 [2024-11-20 11:34:26.735873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:18.999 [2024-11-20 11:34:26.735906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:18.999 [2024-11-20 11:34:26.736039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.999 pt2 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.999 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.000 "name": "raid_bdev1", 00:22:19.000 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:19.000 "strip_size_kb": 0, 00:22:19.000 "state": "online", 00:22:19.000 "raid_level": "raid1", 00:22:19.000 "superblock": true, 00:22:19.000 "num_base_bdevs": 2, 00:22:19.000 "num_base_bdevs_discovered": 2, 00:22:19.000 
"num_base_bdevs_operational": 2, 00:22:19.000 "base_bdevs_list": [ 00:22:19.000 { 00:22:19.000 "name": "pt1", 00:22:19.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.000 "is_configured": true, 00:22:19.000 "data_offset": 256, 00:22:19.000 "data_size": 7936 00:22:19.000 }, 00:22:19.000 { 00:22:19.000 "name": "pt2", 00:22:19.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.000 "is_configured": true, 00:22:19.000 "data_offset": 256, 00:22:19.000 "data_size": 7936 00:22:19.000 } 00:22:19.000 ] 00:22:19.000 }' 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.000 11:34:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.568 [2024-11-20 11:34:27.235221] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.568 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.568 "name": "raid_bdev1", 00:22:19.568 "aliases": [ 00:22:19.568 "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9" 00:22:19.568 ], 00:22:19.569 "product_name": "Raid Volume", 00:22:19.569 "block_size": 4096, 00:22:19.569 "num_blocks": 7936, 00:22:19.569 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:19.569 "md_size": 32, 00:22:19.569 "md_interleave": false, 00:22:19.569 "dif_type": 0, 00:22:19.569 "assigned_rate_limits": { 00:22:19.569 "rw_ios_per_sec": 0, 00:22:19.569 "rw_mbytes_per_sec": 0, 00:22:19.569 "r_mbytes_per_sec": 0, 00:22:19.569 "w_mbytes_per_sec": 0 00:22:19.569 }, 00:22:19.569 "claimed": false, 00:22:19.569 "zoned": false, 00:22:19.569 "supported_io_types": { 00:22:19.569 "read": true, 00:22:19.569 "write": true, 00:22:19.569 "unmap": false, 00:22:19.569 "flush": false, 00:22:19.569 "reset": true, 00:22:19.569 "nvme_admin": false, 00:22:19.569 "nvme_io": false, 00:22:19.569 "nvme_io_md": false, 00:22:19.569 "write_zeroes": true, 00:22:19.569 "zcopy": false, 00:22:19.569 "get_zone_info": false, 00:22:19.569 "zone_management": false, 00:22:19.569 "zone_append": false, 00:22:19.569 "compare": false, 00:22:19.569 "compare_and_write": false, 00:22:19.569 "abort": false, 00:22:19.569 "seek_hole": false, 00:22:19.569 "seek_data": false, 00:22:19.569 "copy": false, 00:22:19.569 "nvme_iov_md": false 00:22:19.569 }, 00:22:19.569 "memory_domains": [ 00:22:19.569 { 00:22:19.569 "dma_device_id": "system", 00:22:19.569 "dma_device_type": 1 00:22:19.569 }, 00:22:19.569 { 00:22:19.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.569 "dma_device_type": 2 00:22:19.569 }, 00:22:19.569 { 00:22:19.569 "dma_device_id": "system", 00:22:19.569 "dma_device_type": 
1 00:22:19.569 }, 00:22:19.569 { 00:22:19.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.569 "dma_device_type": 2 00:22:19.569 } 00:22:19.569 ], 00:22:19.569 "driver_specific": { 00:22:19.569 "raid": { 00:22:19.569 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:19.569 "strip_size_kb": 0, 00:22:19.569 "state": "online", 00:22:19.569 "raid_level": "raid1", 00:22:19.569 "superblock": true, 00:22:19.569 "num_base_bdevs": 2, 00:22:19.569 "num_base_bdevs_discovered": 2, 00:22:19.569 "num_base_bdevs_operational": 2, 00:22:19.569 "base_bdevs_list": [ 00:22:19.569 { 00:22:19.569 "name": "pt1", 00:22:19.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:19.569 "is_configured": true, 00:22:19.569 "data_offset": 256, 00:22:19.569 "data_size": 7936 00:22:19.569 }, 00:22:19.569 { 00:22:19.569 "name": "pt2", 00:22:19.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.569 "is_configured": true, 00:22:19.569 "data_offset": 256, 00:22:19.569 "data_size": 7936 00:22:19.569 } 00:22:19.569 ] 00:22:19.569 } 00:22:19.569 } 00:22:19.569 }' 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:19.569 pt2' 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.569 11:34:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.569 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
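The `verify_raid_bdev_properties` step above builds a comparison string from the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and requires the raid bdev and each base bdev to agree on `4096 32 false 0` (4 KiB blocks, 32-byte separate metadata, no interleave). A small sketch of the same field comparison against a literal bdev description taken from the log (python3 stands in for jq; values are the ones dumped above):

```shell
# Fields compared by verify_raid_bdev_properties, per the jq filter in the log:
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
bdev='{"block_size": 4096, "md_size": 32, "md_interleave": false, "dif_type": 0}'
cmp=$(python3 -c '
import json, sys
b = json.loads(sys.argv[1])
# Lowercase the booleans so the output matches jq join() formatting ("false").
print(" ".join(str(b[k]).lower() for k in ("block_size", "md_size", "md_interleave", "dif_type")))
' "$bdev")
[ "$cmp" = "4096 32 false 0" ] && echo "md layout matches"
```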
# xtrace_disable 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:19.829 [2024-11-20 11:34:27.512008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 '!=' 2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 ']' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.829 [2024-11-20 11:34:27.559753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.829 11:34:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.829 "name": "raid_bdev1", 00:22:19.829 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:19.829 "strip_size_kb": 0, 00:22:19.829 "state": "online", 00:22:19.829 "raid_level": "raid1", 00:22:19.829 "superblock": true, 00:22:19.829 "num_base_bdevs": 2, 00:22:19.829 "num_base_bdevs_discovered": 1, 00:22:19.829 "num_base_bdevs_operational": 1, 00:22:19.829 "base_bdevs_list": [ 00:22:19.829 { 00:22:19.829 "name": null, 00:22:19.829 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:19.829 "is_configured": false, 00:22:19.829 "data_offset": 0, 00:22:19.829 "data_size": 7936 00:22:19.829 }, 00:22:19.829 { 00:22:19.829 "name": "pt2", 00:22:19.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:19.829 "is_configured": true, 00:22:19.829 "data_offset": 256, 00:22:19.829 "data_size": 7936 00:22:19.829 } 00:22:19.829 ] 00:22:19.829 }' 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.829 11:34:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 [2024-11-20 11:34:28.024382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.396 [2024-11-20 11:34:28.024454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.396 [2024-11-20 11:34:28.024573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.396 [2024-11-20 11:34:28.024679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.396 [2024-11-20 11:34:28.024703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.396 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.396 [2024-11-20 11:34:28.096338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.396 [2024-11-20 11:34:28.096453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.396 [2024-11-20 11:34:28.096483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:20.396 [2024-11-20 11:34:28.096501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.397 [2024-11-20 11:34:28.099394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.397 pt2 00:22:20.397 [2024-11-20 11:34:28.099690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.397 [2024-11-20 11:34:28.099793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:20.397 [2024-11-20 11:34:28.099866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.397 [2024-11-20 11:34:28.099999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:20.397 [2024-11-20 11:34:28.100021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:20.397 [2024-11-20 11:34:28.100118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:20.397 [2024-11-20 11:34:28.100264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:20.397 [2024-11-20 11:34:28.100278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:20.397 [2024-11-20 11:34:28.100463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.397 11:34:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.397 "name": "raid_bdev1", 00:22:20.397 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:20.397 "strip_size_kb": 0, 00:22:20.397 "state": "online", 00:22:20.397 "raid_level": "raid1", 00:22:20.397 "superblock": true, 00:22:20.397 "num_base_bdevs": 2, 00:22:20.397 "num_base_bdevs_discovered": 1, 00:22:20.397 "num_base_bdevs_operational": 1, 00:22:20.397 "base_bdevs_list": [ 00:22:20.397 { 00:22:20.397 "name": null, 00:22:20.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.397 "is_configured": false, 00:22:20.397 "data_offset": 256, 00:22:20.397 "data_size": 7936 00:22:20.397 }, 00:22:20.397 { 00:22:20.397 "name": "pt2", 00:22:20.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.397 "is_configured": true, 00:22:20.397 "data_offset": 256, 00:22:20.397 "data_size": 7936 00:22:20.397 } 00:22:20.397 ] 00:22:20.397 }' 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.397 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.965 [2024-11-20 11:34:28.604651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.965 [2024-11-20 11:34:28.604715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.965 [2024-11-20 11:34:28.604824] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.965 [2024-11-20 11:34:28.604903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:22:20.965 [2024-11-20 11:34:28.604919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.965 [2024-11-20 11:34:28.668762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:20.965 [2024-11-20 11:34:28.669484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.965 [2024-11-20 11:34:28.669542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:22:20.965 [2024-11-20 11:34:28.669561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.965 [2024-11-20 11:34:28.672511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.965 [2024-11-20 11:34:28.672706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:20.965 [2024-11-20 11:34:28.672814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:20.965 [2024-11-20 11:34:28.672885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:20.965 [2024-11-20 11:34:28.673119] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:20.965 [2024-11-20 11:34:28.673139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:20.965 [2024-11-20 11:34:28.673168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:20.965 [2024-11-20 11:34:28.673249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.965 [2024-11-20 11:34:28.673357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:20.965 [2024-11-20 11:34:28.673379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:20.965 pt1 00:22:20.965 [2024-11-20 11:34:28.673477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:20.965 [2024-11-20 11:34:28.673641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:20.965 [2024-11-20 11:34:28.673661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:20.965 [2024-11-20 11:34:28.673799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.965 
11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.965 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.965 "name": "raid_bdev1", 00:22:20.965 "uuid": "2871c7e2-2a71-47ab-97f8-523bf9bf0bf9", 00:22:20.965 "strip_size_kb": 0, 00:22:20.965 "state": "online", 00:22:20.965 "raid_level": "raid1", 00:22:20.965 "superblock": true, 00:22:20.965 "num_base_bdevs": 2, 00:22:20.965 "num_base_bdevs_discovered": 1, 00:22:20.965 "num_base_bdevs_operational": 1, 00:22:20.965 "base_bdevs_list": [ 00:22:20.965 { 00:22:20.965 "name": null, 00:22:20.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.965 "is_configured": false, 00:22:20.965 "data_offset": 256, 00:22:20.965 "data_size": 7936 00:22:20.965 }, 00:22:20.965 { 00:22:20.965 "name": "pt2", 00:22:20.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.965 "is_configured": true, 00:22:20.965 "data_offset": 256, 00:22:20.965 "data_size": 7936 00:22:20.965 } 00:22:20.965 ] 00:22:20.965 }' 00:22:20.966 11:34:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.966 11:34:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:21.544 11:34:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.544 [2024-11-20 11:34:29.241387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 '!=' 2871c7e2-2a71-47ab-97f8-523bf9bf0bf9 ']' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87813 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87813 ']' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87813 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87813 00:22:21.544 killing process with pid 87813 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87813' 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87813 00:22:21.544 [2024-11-20 11:34:29.315489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.544 11:34:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87813 00:22:21.544 [2024-11-20 11:34:29.315639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.544 [2024-11-20 11:34:29.315736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.544 [2024-11-20 11:34:29.315768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:21.803 [2024-11-20 11:34:29.532116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:23.216 11:34:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:23.216 00:22:23.216 real 0m6.783s 00:22:23.216 user 0m10.562s 00:22:23.216 sys 0m1.048s 00:22:23.216 ************************************ 00:22:23.216 END TEST raid_superblock_test_md_separate 00:22:23.216 ************************************ 00:22:23.216 11:34:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.216 11:34:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 11:34:30 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:22:23.216 11:34:30 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:23.216 11:34:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:23.216 11:34:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.216 11:34:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 
************************************ 00:22:23.216 START TEST raid_rebuild_test_sb_md_separate 00:22:23.216 ************************************ 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 
-- # local base_bdevs 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88147 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88147 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88147 ']' 00:22:23.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.216 11:34:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.216 [2024-11-20 11:34:30.871979] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:22:23.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:23.216 Zero copy mechanism will not be used. 00:22:23.216 [2024-11-20 11:34:30.872575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88147 ] 00:22:23.476 [2024-11-20 11:34:31.153605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.476 [2024-11-20 11:34:31.309647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.735 [2024-11-20 11:34:31.528768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.735 [2024-11-20 11:34:31.529174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.303 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.303 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:24.303 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 BaseBdev1_malloc 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 [2024-11-20 11:34:31.919036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:24.304 [2024-11-20 11:34:31.919136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.304 [2024-11-20 11:34:31.919172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:24.304 [2024-11-20 11:34:31.919191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.304 [2024-11-20 11:34:31.921833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.304 [2024-11-20 11:34:31.922197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:24.304 BaseBdev1 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:24.304 11:34:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 BaseBdev2_malloc 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 [2024-11-20 11:34:31.976771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:24.304 [2024-11-20 11:34:31.976876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.304 [2024-11-20 11:34:31.976909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:24.304 [2024-11-20 11:34:31.976929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.304 [2024-11-20 11:34:31.979744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.304 [2024-11-20 11:34:31.979793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:24.304 BaseBdev2 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 spare_malloc 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 spare_delay 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 [2024-11-20 11:34:32.054160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:24.304 [2024-11-20 11:34:32.054261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.304 [2024-11-20 11:34:32.054295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:24.304 [2024-11-20 11:34:32.054315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.304 [2024-11-20 11:34:32.057093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.304 [2024-11-20 11:34:32.057141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:24.304 spare 00:22:24.304 11:34:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 [2024-11-20 11:34:32.062245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.304 [2024-11-20 11:34:32.065432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.304 [2024-11-20 11:34:32.065716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:24.304 [2024-11-20 11:34:32.065741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:24.304 [2024-11-20 11:34:32.065839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:24.304 [2024-11-20 11:34:32.066008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:24.304 [2024-11-20 11:34:32.066023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:24.304 [2024-11-20 11:34:32.066216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.304 "name": "raid_bdev1", 00:22:24.304 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:24.304 "strip_size_kb": 0, 00:22:24.304 "state": "online", 00:22:24.304 "raid_level": "raid1", 00:22:24.304 "superblock": true, 00:22:24.304 "num_base_bdevs": 2, 00:22:24.304 "num_base_bdevs_discovered": 2, 00:22:24.304 "num_base_bdevs_operational": 2, 00:22:24.304 "base_bdevs_list": [ 
00:22:24.304 { 00:22:24.304 "name": "BaseBdev1", 00:22:24.304 "uuid": "69e0b21e-b346-5306-a7f6-cc0a6fa5f045", 00:22:24.304 "is_configured": true, 00:22:24.304 "data_offset": 256, 00:22:24.304 "data_size": 7936 00:22:24.304 }, 00:22:24.304 { 00:22:24.304 "name": "BaseBdev2", 00:22:24.304 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:24.304 "is_configured": true, 00:22:24.304 "data_offset": 256, 00:22:24.304 "data_size": 7936 00:22:24.304 } 00:22:24.304 ] 00:22:24.304 }' 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.304 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.882 [2024-11-20 11:34:32.582922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:24.882 11:34:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:25.469 [2024-11-20 11:34:33.062739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 
00:22:25.469 /dev/nbd0 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:25.469 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:25.470 1+0 records in 00:22:25.470 1+0 records out 00:22:25.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324804 s, 12.6 MB/s 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:25.470 11:34:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:25.470 11:34:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:26.437 7936+0 records in 00:22:26.437 7936+0 records out 00:22:26.437 32505856 bytes (33 MB, 31 MiB) copied, 0.930442 s, 34.9 MB/s 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:26.437 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:26.696 [2024-11-20 11:34:34.383879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.696 11:34:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.696 [2024-11-20 11:34:34.400406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.696 11:34:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.696 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.697 "name": "raid_bdev1", 00:22:26.697 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:26.697 "strip_size_kb": 0, 00:22:26.697 "state": "online", 00:22:26.697 "raid_level": "raid1", 00:22:26.697 "superblock": true, 00:22:26.697 "num_base_bdevs": 2, 00:22:26.697 "num_base_bdevs_discovered": 1, 00:22:26.697 "num_base_bdevs_operational": 1, 00:22:26.697 "base_bdevs_list": [ 00:22:26.697 { 00:22:26.697 "name": null, 00:22:26.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.697 "is_configured": false, 00:22:26.697 "data_offset": 0, 00:22:26.697 "data_size": 7936 
00:22:26.697 }, 00:22:26.697 { 00:22:26.697 "name": "BaseBdev2", 00:22:26.697 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:26.697 "is_configured": true, 00:22:26.697 "data_offset": 256, 00:22:26.697 "data_size": 7936 00:22:26.697 } 00:22:26.697 ] 00:22:26.697 }' 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.697 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.264 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.264 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.264 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.264 [2024-11-20 11:34:34.924566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.264 [2024-11-20 11:34:34.938288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:27.264 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.264 11:34:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:27.264 [2024-11-20 11:34:34.941320] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.231 11:34:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.231 11:34:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.231 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.231 "name": "raid_bdev1", 00:22:28.231 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:28.231 "strip_size_kb": 0, 00:22:28.231 "state": "online", 00:22:28.231 "raid_level": "raid1", 00:22:28.231 "superblock": true, 00:22:28.231 "num_base_bdevs": 2, 00:22:28.231 "num_base_bdevs_discovered": 2, 00:22:28.231 "num_base_bdevs_operational": 2, 00:22:28.231 "process": { 00:22:28.231 "type": "rebuild", 00:22:28.231 "target": "spare", 00:22:28.231 "progress": { 00:22:28.231 "blocks": 2560, 00:22:28.231 "percent": 32 00:22:28.231 } 00:22:28.231 }, 00:22:28.231 "base_bdevs_list": [ 00:22:28.231 { 00:22:28.231 "name": "spare", 00:22:28.231 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:28.231 "is_configured": true, 00:22:28.231 "data_offset": 256, 00:22:28.231 "data_size": 7936 00:22:28.231 }, 00:22:28.231 { 00:22:28.231 "name": "BaseBdev2", 00:22:28.231 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:28.231 "is_configured": true, 00:22:28.231 "data_offset": 256, 00:22:28.231 "data_size": 7936 00:22:28.231 } 00:22:28.231 ] 00:22:28.231 }' 00:22:28.231 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:22:28.231 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.231 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.490 [2024-11-20 11:34:36.124168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.490 [2024-11-20 11:34:36.153810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.490 [2024-11-20 11:34:36.154593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.490 [2024-11-20 11:34:36.154624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.490 [2024-11-20 11:34:36.154641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.490 "name": "raid_bdev1", 00:22:28.490 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:28.490 "strip_size_kb": 0, 00:22:28.490 "state": "online", 00:22:28.490 "raid_level": "raid1", 00:22:28.490 "superblock": true, 00:22:28.490 "num_base_bdevs": 2, 00:22:28.490 "num_base_bdevs_discovered": 1, 00:22:28.490 "num_base_bdevs_operational": 1, 00:22:28.490 "base_bdevs_list": [ 00:22:28.490 { 00:22:28.490 "name": null, 00:22:28.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.490 "is_configured": false, 00:22:28.490 
"data_offset": 0, 00:22:28.490 "data_size": 7936 00:22:28.490 }, 00:22:28.490 { 00:22:28.490 "name": "BaseBdev2", 00:22:28.490 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:28.490 "is_configured": true, 00:22:28.490 "data_offset": 256, 00:22:28.490 "data_size": 7936 00:22:28.490 } 00:22:28.490 ] 00:22:28.490 }' 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.490 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:29.056 "name": "raid_bdev1", 00:22:29.056 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:29.056 
"strip_size_kb": 0, 00:22:29.056 "state": "online", 00:22:29.056 "raid_level": "raid1", 00:22:29.056 "superblock": true, 00:22:29.056 "num_base_bdevs": 2, 00:22:29.056 "num_base_bdevs_discovered": 1, 00:22:29.056 "num_base_bdevs_operational": 1, 00:22:29.056 "base_bdevs_list": [ 00:22:29.056 { 00:22:29.056 "name": null, 00:22:29.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.056 "is_configured": false, 00:22:29.056 "data_offset": 0, 00:22:29.056 "data_size": 7936 00:22:29.056 }, 00:22:29.056 { 00:22:29.056 "name": "BaseBdev2", 00:22:29.056 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:29.056 "is_configured": true, 00:22:29.056 "data_offset": 256, 00:22:29.056 "data_size": 7936 00:22:29.056 } 00:22:29.056 ] 00:22:29.056 }' 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.056 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.056 [2024-11-20 11:34:36.868844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:29.056 [2024-11-20 11:34:36.881711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:29.057 11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.057 
11:34:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:29.057 [2024-11-20 11:34:36.884325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.434 "name": "raid_bdev1", 00:22:30.434 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:30.434 "strip_size_kb": 0, 00:22:30.434 "state": "online", 00:22:30.434 "raid_level": "raid1", 00:22:30.434 "superblock": true, 00:22:30.434 "num_base_bdevs": 2, 00:22:30.434 "num_base_bdevs_discovered": 2, 00:22:30.434 "num_base_bdevs_operational": 2, 00:22:30.434 "process": { 00:22:30.434 "type": "rebuild", 
00:22:30.434 "target": "spare", 00:22:30.434 "progress": { 00:22:30.434 "blocks": 2304, 00:22:30.434 "percent": 29 00:22:30.434 } 00:22:30.434 }, 00:22:30.434 "base_bdevs_list": [ 00:22:30.434 { 00:22:30.434 "name": "spare", 00:22:30.434 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:30.434 "is_configured": true, 00:22:30.434 "data_offset": 256, 00:22:30.434 "data_size": 7936 00:22:30.434 }, 00:22:30.434 { 00:22:30.434 "name": "BaseBdev2", 00:22:30.434 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:30.434 "is_configured": true, 00:22:30.434 "data_offset": 256, 00:22:30.434 "data_size": 7936 00:22:30.434 } 00:22:30.434 ] 00:22:30.434 }' 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.434 11:34:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:30.434 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=768 00:22:30.434 11:34:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.434 "name": "raid_bdev1", 00:22:30.434 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:30.434 "strip_size_kb": 0, 00:22:30.434 "state": "online", 00:22:30.434 "raid_level": "raid1", 00:22:30.434 "superblock": true, 00:22:30.434 "num_base_bdevs": 2, 00:22:30.434 "num_base_bdevs_discovered": 2, 00:22:30.434 "num_base_bdevs_operational": 2, 00:22:30.434 "process": { 00:22:30.434 "type": "rebuild", 00:22:30.434 "target": "spare", 00:22:30.434 "progress": { 00:22:30.434 "blocks": 2816, 00:22:30.434 "percent": 35 00:22:30.434 } 00:22:30.434 
}, 00:22:30.434 "base_bdevs_list": [ 00:22:30.434 { 00:22:30.434 "name": "spare", 00:22:30.434 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:30.434 "is_configured": true, 00:22:30.434 "data_offset": 256, 00:22:30.434 "data_size": 7936 00:22:30.434 }, 00:22:30.434 { 00:22:30.434 "name": "BaseBdev2", 00:22:30.434 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:30.434 "is_configured": true, 00:22:30.434 "data_offset": 256, 00:22:30.434 "data_size": 7936 00:22:30.434 } 00:22:30.434 ] 00:22:30.434 }' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.434 11:34:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.429 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.687 "name": "raid_bdev1", 00:22:31.687 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:31.687 "strip_size_kb": 0, 00:22:31.687 "state": "online", 00:22:31.687 "raid_level": "raid1", 00:22:31.687 "superblock": true, 00:22:31.687 "num_base_bdevs": 2, 00:22:31.687 "num_base_bdevs_discovered": 2, 00:22:31.687 "num_base_bdevs_operational": 2, 00:22:31.687 "process": { 00:22:31.687 "type": "rebuild", 00:22:31.687 "target": "spare", 00:22:31.687 "progress": { 00:22:31.687 "blocks": 5888, 00:22:31.687 "percent": 74 00:22:31.687 } 00:22:31.687 }, 00:22:31.687 "base_bdevs_list": [ 00:22:31.687 { 00:22:31.687 "name": "spare", 00:22:31.687 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:31.687 "is_configured": true, 00:22:31.687 "data_offset": 256, 00:22:31.687 "data_size": 7936 00:22:31.687 }, 00:22:31.687 { 00:22:31.687 "name": "BaseBdev2", 00:22:31.687 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:31.687 "is_configured": true, 00:22:31.687 "data_offset": 256, 00:22:31.687 "data_size": 7936 00:22:31.687 } 00:22:31.687 ] 00:22:31.687 }' 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.687 11:34:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:32.254 [2024-11-20 11:34:40.015515] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:32.254 [2024-11-20 11:34:40.015981] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:32.254 [2024-11-20 11:34:40.016219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.821 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.821 "name": "raid_bdev1", 00:22:32.821 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:32.821 "strip_size_kb": 0, 00:22:32.821 "state": "online", 00:22:32.821 "raid_level": "raid1", 00:22:32.821 "superblock": true, 00:22:32.821 "num_base_bdevs": 2, 00:22:32.821 "num_base_bdevs_discovered": 2, 00:22:32.821 "num_base_bdevs_operational": 2, 00:22:32.821 "base_bdevs_list": [ 00:22:32.822 { 00:22:32.822 "name": "spare", 00:22:32.822 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:32.822 "is_configured": true, 00:22:32.822 "data_offset": 256, 00:22:32.822 "data_size": 7936 00:22:32.822 }, 00:22:32.822 { 00:22:32.822 "name": "BaseBdev2", 00:22:32.822 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:32.822 "is_configured": true, 00:22:32.822 "data_offset": 256, 00:22:32.822 "data_size": 7936 00:22:32.822 } 00:22:32.822 ] 00:22:32.822 }' 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.822 "name": "raid_bdev1", 00:22:32.822 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:32.822 "strip_size_kb": 0, 00:22:32.822 "state": "online", 00:22:32.822 "raid_level": "raid1", 00:22:32.822 "superblock": true, 00:22:32.822 "num_base_bdevs": 2, 00:22:32.822 "num_base_bdevs_discovered": 2, 00:22:32.822 "num_base_bdevs_operational": 2, 00:22:32.822 "base_bdevs_list": [ 00:22:32.822 { 00:22:32.822 "name": "spare", 00:22:32.822 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:32.822 "is_configured": true, 00:22:32.822 "data_offset": 256, 00:22:32.822 "data_size": 7936 00:22:32.822 }, 00:22:32.822 { 00:22:32.822 "name": "BaseBdev2", 00:22:32.822 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:32.822 "is_configured": true, 00:22:32.822 "data_offset": 256, 00:22:32.822 "data_size": 7936 00:22:32.822 } 00:22:32.822 ] 00:22:32.822 }' 00:22:32.822 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.081 11:34:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.081 11:34:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.081 "name": "raid_bdev1", 00:22:33.081 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:33.081 "strip_size_kb": 0, 00:22:33.081 "state": "online", 00:22:33.081 "raid_level": "raid1", 00:22:33.081 "superblock": true, 00:22:33.081 "num_base_bdevs": 2, 00:22:33.081 "num_base_bdevs_discovered": 2, 00:22:33.081 "num_base_bdevs_operational": 2, 00:22:33.081 "base_bdevs_list": [ 00:22:33.081 { 00:22:33.081 "name": "spare", 00:22:33.081 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:33.081 "is_configured": true, 00:22:33.081 "data_offset": 256, 00:22:33.081 "data_size": 7936 00:22:33.081 }, 00:22:33.081 { 00:22:33.081 "name": "BaseBdev2", 00:22:33.081 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:33.081 "is_configured": true, 00:22:33.081 "data_offset": 256, 00:22:33.081 "data_size": 7936 00:22:33.081 } 00:22:33.081 ] 00:22:33.081 }' 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.081 11:34:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.648 [2024-11-20 11:34:41.244819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:33.648 [2024-11-20 11:34:41.245152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:22:33.648 [2024-11-20 11:34:41.245411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.648 [2024-11-20 11:34:41.245657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.648 [2024-11-20 11:34:41.245830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:33.648 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:33.914 /dev/nbd0 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:33.914 11:34:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:33.914 1+0 records in 00:22:33.914 1+0 records out 00:22:33.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722934 s, 5.7 MB/s 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:33.914 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:34.183 /dev/nbd1 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:34.183 11:34:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:34.183 1+0 records in 00:22:34.183 1+0 records out 00:22:34.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361764 s, 11.3 MB/s 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:34.183 11:34:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 
-- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:34.442 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:34.717 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 [2024-11-20 11:34:42.872328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:22:35.301 [2024-11-20 11:34:42.872417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.301 [2024-11-20 11:34:42.872454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:35.301 [2024-11-20 11:34:42.872470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.301 [2024-11-20 11:34:42.875666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.301 [2024-11-20 11:34:42.875770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:35.301 [2024-11-20 11:34:42.875852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:35.301 [2024-11-20 11:34:42.875920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:35.301 [2024-11-20 11:34:42.876097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:35.301 spare 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 [2024-11-20 11:34:42.976248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:35.301 [2024-11-20 11:34:42.976722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:35.301 [2024-11-20 11:34:42.976990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:35.301 [2024-11-20 11:34:42.977261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 
00:22:35.301 [2024-11-20 11:34:42.977276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:35.301 [2024-11-20 11:34:42.977528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.301 11:34:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.301 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.301 "name": "raid_bdev1", 00:22:35.301 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:35.301 "strip_size_kb": 0, 00:22:35.301 "state": "online", 00:22:35.301 "raid_level": "raid1", 00:22:35.301 "superblock": true, 00:22:35.301 "num_base_bdevs": 2, 00:22:35.301 "num_base_bdevs_discovered": 2, 00:22:35.301 "num_base_bdevs_operational": 2, 00:22:35.301 "base_bdevs_list": [ 00:22:35.301 { 00:22:35.301 "name": "spare", 00:22:35.301 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:35.301 "is_configured": true, 00:22:35.301 "data_offset": 256, 00:22:35.301 "data_size": 7936 00:22:35.301 }, 00:22:35.301 { 00:22:35.301 "name": "BaseBdev2", 00:22:35.301 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:35.301 "is_configured": true, 00:22:35.301 "data_offset": 256, 00:22:35.301 "data_size": 7936 00:22:35.301 } 00:22:35.301 ] 00:22:35.301 }' 00:22:35.301 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.302 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.867 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.867 "name": "raid_bdev1", 00:22:35.868 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:35.868 "strip_size_kb": 0, 00:22:35.868 "state": "online", 00:22:35.868 "raid_level": "raid1", 00:22:35.868 "superblock": true, 00:22:35.868 "num_base_bdevs": 2, 00:22:35.868 "num_base_bdevs_discovered": 2, 00:22:35.868 "num_base_bdevs_operational": 2, 00:22:35.868 "base_bdevs_list": [ 00:22:35.868 { 00:22:35.868 "name": "spare", 00:22:35.868 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:35.868 "is_configured": true, 00:22:35.868 "data_offset": 256, 00:22:35.868 "data_size": 7936 00:22:35.868 }, 00:22:35.868 { 00:22:35.868 "name": "BaseBdev2", 00:22:35.868 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:35.868 "is_configured": true, 00:22:35.868 "data_offset": 256, 00:22:35.868 "data_size": 7936 00:22:35.868 } 00:22:35.868 ] 00:22:35.868 }' 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.868 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.125 [2024-11-20 11:34:43.709942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.125 11:34:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.125 "name": "raid_bdev1", 00:22:36.125 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:36.125 "strip_size_kb": 0, 00:22:36.125 "state": "online", 00:22:36.125 "raid_level": "raid1", 00:22:36.125 "superblock": true, 00:22:36.125 "num_base_bdevs": 2, 00:22:36.125 "num_base_bdevs_discovered": 1, 00:22:36.125 "num_base_bdevs_operational": 1, 00:22:36.125 "base_bdevs_list": [ 00:22:36.125 { 00:22:36.125 "name": null, 00:22:36.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.125 "is_configured": false, 00:22:36.125 "data_offset": 0, 00:22:36.125 "data_size": 7936 
00:22:36.125 }, 00:22:36.125 { 00:22:36.125 "name": "BaseBdev2", 00:22:36.125 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:36.125 "is_configured": true, 00:22:36.125 "data_offset": 256, 00:22:36.125 "data_size": 7936 00:22:36.125 } 00:22:36.125 ] 00:22:36.125 }' 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.125 11:34:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.691 11:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:36.691 11:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.691 11:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.691 [2024-11-20 11:34:44.230055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.691 [2024-11-20 11:34:44.230413] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:36.691 [2024-11-20 11:34:44.230442] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:36.691 [2024-11-20 11:34:44.230524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.691 [2024-11-20 11:34:44.242787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:36.691 11:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.691 11:34:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:36.691 [2024-11-20 11:34:44.245474] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.622 "name": "raid_bdev1", 00:22:37.622 
"uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:37.622 "strip_size_kb": 0, 00:22:37.622 "state": "online", 00:22:37.622 "raid_level": "raid1", 00:22:37.622 "superblock": true, 00:22:37.622 "num_base_bdevs": 2, 00:22:37.622 "num_base_bdevs_discovered": 2, 00:22:37.622 "num_base_bdevs_operational": 2, 00:22:37.622 "process": { 00:22:37.622 "type": "rebuild", 00:22:37.622 "target": "spare", 00:22:37.622 "progress": { 00:22:37.622 "blocks": 2560, 00:22:37.622 "percent": 32 00:22:37.622 } 00:22:37.622 }, 00:22:37.622 "base_bdevs_list": [ 00:22:37.622 { 00:22:37.622 "name": "spare", 00:22:37.622 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:37.622 "is_configured": true, 00:22:37.622 "data_offset": 256, 00:22:37.622 "data_size": 7936 00:22:37.622 }, 00:22:37.622 { 00:22:37.622 "name": "BaseBdev2", 00:22:37.622 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:37.622 "is_configured": true, 00:22:37.622 "data_offset": 256, 00:22:37.622 "data_size": 7936 00:22:37.622 } 00:22:37.622 ] 00:22:37.622 }' 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.622 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.622 [2024-11-20 11:34:45.419361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:37.623 
[2024-11-20 11:34:45.456976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:37.623 [2024-11-20 11:34:45.457054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.623 [2024-11-20 11:34:45.457093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:37.623 [2024-11-20 11:34:45.457136] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.880 11:34:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.880 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.880 "name": "raid_bdev1", 00:22:37.880 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:37.880 "strip_size_kb": 0, 00:22:37.880 "state": "online", 00:22:37.880 "raid_level": "raid1", 00:22:37.880 "superblock": true, 00:22:37.880 "num_base_bdevs": 2, 00:22:37.880 "num_base_bdevs_discovered": 1, 00:22:37.880 "num_base_bdevs_operational": 1, 00:22:37.880 "base_bdevs_list": [ 00:22:37.880 { 00:22:37.880 "name": null, 00:22:37.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.880 "is_configured": false, 00:22:37.880 "data_offset": 0, 00:22:37.880 "data_size": 7936 00:22:37.880 }, 00:22:37.880 { 00:22:37.881 "name": "BaseBdev2", 00:22:37.881 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:37.881 "is_configured": true, 00:22:37.881 "data_offset": 256, 00:22:37.881 "data_size": 7936 00:22:37.881 } 00:22:37.881 ] 00:22:37.881 }' 00:22:37.881 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.881 11:34:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.446 11:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:38.446 11:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.446 11:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.446 [2024-11-20 11:34:46.029133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.446 [2024-11-20 11:34:46.029238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.446 [2024-11-20 11:34:46.029281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:38.446 [2024-11-20 11:34:46.029301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.446 [2024-11-20 11:34:46.029769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.446 [2024-11-20 11:34:46.029802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:38.446 [2024-11-20 11:34:46.029889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:38.446 [2024-11-20 11:34:46.029921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:38.446 [2024-11-20 11:34:46.029936] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:38.446 [2024-11-20 11:34:46.029968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.446 [2024-11-20 11:34:46.043492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:38.446 spare 00:22:38.446 11:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.446 11:34:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:38.446 [2024-11-20 11:34:46.046481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:39.379 "name": 
"raid_bdev1", 00:22:39.379 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:39.379 "strip_size_kb": 0, 00:22:39.379 "state": "online", 00:22:39.379 "raid_level": "raid1", 00:22:39.379 "superblock": true, 00:22:39.379 "num_base_bdevs": 2, 00:22:39.379 "num_base_bdevs_discovered": 2, 00:22:39.379 "num_base_bdevs_operational": 2, 00:22:39.379 "process": { 00:22:39.379 "type": "rebuild", 00:22:39.379 "target": "spare", 00:22:39.379 "progress": { 00:22:39.379 "blocks": 2560, 00:22:39.379 "percent": 32 00:22:39.379 } 00:22:39.379 }, 00:22:39.379 "base_bdevs_list": [ 00:22:39.379 { 00:22:39.379 "name": "spare", 00:22:39.379 "uuid": "0f921fc6-8feb-5e64-9d6b-7470bcf2d055", 00:22:39.379 "is_configured": true, 00:22:39.379 "data_offset": 256, 00:22:39.379 "data_size": 7936 00:22:39.379 }, 00:22:39.379 { 00:22:39.379 "name": "BaseBdev2", 00:22:39.379 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:39.379 "is_configured": true, 00:22:39.379 "data_offset": 256, 00:22:39.379 "data_size": 7936 00:22:39.379 } 00:22:39.379 ] 00:22:39.379 }' 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.379 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.379 [2024-11-20 11:34:47.213572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:39.638 [2024-11-20 11:34:47.259343] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:39.638 [2024-11-20 11:34:47.259834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.638 [2024-11-20 11:34:47.259872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:39.638 [2024-11-20 11:34:47.259885] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.638 "name": "raid_bdev1", 00:22:39.638 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:39.638 "strip_size_kb": 0, 00:22:39.638 "state": "online", 00:22:39.638 "raid_level": "raid1", 00:22:39.638 "superblock": true, 00:22:39.638 "num_base_bdevs": 2, 00:22:39.638 "num_base_bdevs_discovered": 1, 00:22:39.638 "num_base_bdevs_operational": 1, 00:22:39.638 "base_bdevs_list": [ 00:22:39.638 { 00:22:39.638 "name": null, 00:22:39.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.638 "is_configured": false, 00:22:39.638 "data_offset": 0, 00:22:39.638 "data_size": 7936 00:22:39.638 }, 00:22:39.638 { 00:22:39.638 "name": "BaseBdev2", 00:22:39.638 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:39.638 "is_configured": true, 00:22:39.638 "data_offset": 256, 00:22:39.638 "data_size": 7936 00:22:39.638 } 00:22:39.638 ] 00:22:39.638 }' 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.638 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:40.207 11:34:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:40.207 "name": "raid_bdev1", 00:22:40.207 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:40.207 "strip_size_kb": 0, 00:22:40.207 "state": "online", 00:22:40.207 "raid_level": "raid1", 00:22:40.207 "superblock": true, 00:22:40.207 "num_base_bdevs": 2, 00:22:40.207 "num_base_bdevs_discovered": 1, 00:22:40.207 "num_base_bdevs_operational": 1, 00:22:40.207 "base_bdevs_list": [ 00:22:40.207 { 00:22:40.207 "name": null, 00:22:40.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.207 "is_configured": false, 00:22:40.207 "data_offset": 0, 00:22:40.207 "data_size": 7936 00:22:40.207 }, 00:22:40.207 { 00:22:40.207 "name": "BaseBdev2", 00:22:40.207 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:40.207 "is_configured": true, 00:22:40.207 "data_offset": 256, 00:22:40.207 "data_size": 7936 00:22:40.207 } 00:22:40.207 ] 00:22:40.207 }' 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.207 [2024-11-20 11:34:47.928346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:40.207 [2024-11-20 11:34:47.928479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.207 [2024-11-20 11:34:47.928523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:40.207 [2024-11-20 11:34:47.928539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.207 [2024-11-20 11:34:47.929127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.207 [2024-11-20 11:34:47.929153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:22:40.207 [2024-11-20 11:34:47.929269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:40.207 [2024-11-20 11:34:47.929307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:40.207 [2024-11-20 11:34:47.929323] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:40.207 [2024-11-20 11:34:47.929338] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:40.207 BaseBdev1 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.207 11:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 11:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.403 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.403 "name": "raid_bdev1", 00:22:41.403 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:41.403 "strip_size_kb": 0, 00:22:41.403 "state": "online", 00:22:41.403 "raid_level": "raid1", 00:22:41.403 "superblock": true, 00:22:41.403 "num_base_bdevs": 2, 00:22:41.403 "num_base_bdevs_discovered": 1, 00:22:41.403 "num_base_bdevs_operational": 1, 00:22:41.404 "base_bdevs_list": [ 00:22:41.404 { 00:22:41.404 "name": null, 00:22:41.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.404 "is_configured": false, 00:22:41.404 "data_offset": 0, 00:22:41.404 "data_size": 7936 00:22:41.404 }, 00:22:41.404 { 00:22:41.404 "name": "BaseBdev2", 00:22:41.404 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:41.404 "is_configured": true, 00:22:41.404 "data_offset": 256, 00:22:41.404 "data_size": 7936 00:22:41.404 } 00:22:41.404 ] 00:22:41.404 }' 00:22:41.404 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.404 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.674 "name": "raid_bdev1", 00:22:41.674 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:41.674 "strip_size_kb": 0, 00:22:41.674 "state": "online", 00:22:41.674 "raid_level": "raid1", 00:22:41.674 "superblock": true, 00:22:41.674 "num_base_bdevs": 2, 00:22:41.674 "num_base_bdevs_discovered": 1, 00:22:41.674 "num_base_bdevs_operational": 1, 00:22:41.674 "base_bdevs_list": [ 00:22:41.674 { 00:22:41.674 "name": null, 00:22:41.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.674 "is_configured": false, 00:22:41.674 "data_offset": 0, 00:22:41.674 "data_size": 7936 00:22:41.674 }, 00:22:41.674 { 00:22:41.674 "name": "BaseBdev2", 00:22:41.674 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:41.674 "is_configured": 
true, 00:22:41.674 "data_offset": 256, 00:22:41.674 "data_size": 7936 00:22:41.674 } 00:22:41.674 ] 00:22:41.674 }' 00:22:41.674 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.945 [2024-11-20 11:34:49.605100] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.945 [2024-11-20 11:34:49.605587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:41.945 [2024-11-20 11:34:49.605631] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:41.945 request: 00:22:41.945 { 00:22:41.945 "base_bdev": "BaseBdev1", 00:22:41.945 "raid_bdev": "raid_bdev1", 00:22:41.945 "method": "bdev_raid_add_base_bdev", 00:22:41.945 "req_id": 1 00:22:41.945 } 00:22:41.945 Got JSON-RPC error response 00:22:41.945 response: 00:22:41.945 { 00:22:41.945 "code": -22, 00:22:41.945 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:41.945 } 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.945 11:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.885 "name": "raid_bdev1", 00:22:42.885 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:42.885 "strip_size_kb": 0, 00:22:42.885 "state": "online", 00:22:42.885 "raid_level": "raid1", 00:22:42.885 "superblock": true, 00:22:42.885 "num_base_bdevs": 2, 00:22:42.885 "num_base_bdevs_discovered": 1, 00:22:42.885 "num_base_bdevs_operational": 1, 00:22:42.885 "base_bdevs_list": [ 00:22:42.885 { 00:22:42.885 "name": null, 00:22:42.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.885 "is_configured": false, 00:22:42.885 
"data_offset": 0, 00:22:42.885 "data_size": 7936 00:22:42.885 }, 00:22:42.885 { 00:22:42.885 "name": "BaseBdev2", 00:22:42.885 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:42.885 "is_configured": true, 00:22:42.885 "data_offset": 256, 00:22:42.885 "data_size": 7936 00:22:42.885 } 00:22:42.885 ] 00:22:42.885 }' 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.885 11:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.453 "name": "raid_bdev1", 00:22:43.453 "uuid": "9a384cab-7236-4e71-946d-31a8d64b76c1", 00:22:43.453 
"strip_size_kb": 0, 00:22:43.453 "state": "online", 00:22:43.453 "raid_level": "raid1", 00:22:43.453 "superblock": true, 00:22:43.453 "num_base_bdevs": 2, 00:22:43.453 "num_base_bdevs_discovered": 1, 00:22:43.453 "num_base_bdevs_operational": 1, 00:22:43.453 "base_bdevs_list": [ 00:22:43.453 { 00:22:43.453 "name": null, 00:22:43.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.453 "is_configured": false, 00:22:43.453 "data_offset": 0, 00:22:43.453 "data_size": 7936 00:22:43.453 }, 00:22:43.453 { 00:22:43.453 "name": "BaseBdev2", 00:22:43.453 "uuid": "72535e7a-2601-56a7-8616-adf6fca1ee29", 00:22:43.453 "is_configured": true, 00:22:43.453 "data_offset": 256, 00:22:43.453 "data_size": 7936 00:22:43.453 } 00:22:43.453 ] 00:22:43.453 }' 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:43.453 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.712 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88147 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88147 ']' 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88147 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88147 00:22:43.713 killing process with 
pid 88147 00:22:43.713 Received shutdown signal, test time was about 60.000000 seconds 00:22:43.713 00:22:43.713 Latency(us) 00:22:43.713 [2024-11-20T11:34:51.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.713 [2024-11-20T11:34:51.559Z] =================================================================================================================== 00:22:43.713 [2024-11-20T11:34:51.559Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88147' 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88147 00:22:43.713 11:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88147 00:22:43.713 [2024-11-20 11:34:51.349315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:43.713 [2024-11-20 11:34:51.349531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.713 [2024-11-20 11:34:51.349607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.713 [2024-11-20 11:34:51.349645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:43.971 [2024-11-20 11:34:51.661458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:45.348 11:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:22:45.348 00:22:45.348 real 0m22.087s 00:22:45.348 user 0m30.004s 00:22:45.348 sys 0m2.643s 00:22:45.348 11:34:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.348 ************************************ 00:22:45.348 END TEST raid_rebuild_test_sb_md_separate 00:22:45.348 ************************************ 00:22:45.348 11:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.348 11:34:52 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:45.348 11:34:52 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:45.348 11:34:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:45.348 11:34:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.348 11:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:45.348 ************************************ 00:22:45.348 START TEST raid_state_function_test_sb_md_interleaved 00:22:45.348 ************************************ 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:45.348 11:34:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:45.348 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:22:45.349 Process raid pid: 88849 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88849 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88849' 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88849 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88849 ']' 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.349 11:34:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.349 [2024-11-20 11:34:52.952247] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:22:45.349 [2024-11-20 11:34:52.952401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.349 [2024-11-20 11:34:53.142970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.607 [2024-11-20 11:34:53.337291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.865 [2024-11-20 11:34:53.581897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.865 [2024-11-20 11:34:53.581971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.433 [2024-11-20 11:34:54.047621] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.433 [2024-11-20 11:34:54.047754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.433 [2024-11-20 11:34:54.047770] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.433 [2024-11-20 11:34:54.047785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.433 11:34:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.433 11:34:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.433 "name": "Existed_Raid", 00:22:46.433 "uuid": "af537a9a-b010-472a-be56-d181de93d0cc", 00:22:46.433 "strip_size_kb": 0, 00:22:46.433 "state": "configuring", 00:22:46.433 "raid_level": "raid1", 00:22:46.433 "superblock": true, 00:22:46.433 "num_base_bdevs": 2, 00:22:46.433 "num_base_bdevs_discovered": 0, 00:22:46.433 "num_base_bdevs_operational": 2, 00:22:46.433 "base_bdevs_list": [ 00:22:46.433 { 00:22:46.433 "name": "BaseBdev1", 00:22:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.433 "is_configured": false, 00:22:46.433 "data_offset": 0, 00:22:46.433 "data_size": 0 00:22:46.433 }, 00:22:46.433 { 00:22:46.433 "name": "BaseBdev2", 00:22:46.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.433 "is_configured": false, 00:22:46.433 "data_offset": 0, 00:22:46.433 "data_size": 0 00:22:46.433 } 00:22:46.433 ] 00:22:46.433 }' 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.433 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.001 [2024-11-20 11:34:54.575767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:47.001 [2024-11-20 11:34:54.575850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.001 [2024-11-20 11:34:54.587641] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:47.001 [2024-11-20 11:34:54.587874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:47.001 [2024-11-20 11:34:54.588011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:47.001 [2024-11-20 11:34:54.588083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.001 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.001 [2024-11-20 11:34:54.641503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.002 BaseBdev1 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.002 [ 00:22:47.002 { 00:22:47.002 "name": "BaseBdev1", 00:22:47.002 "aliases": [ 00:22:47.002 "f84cf566-e38f-4c33-a0e7-c2603570a710" 00:22:47.002 ], 00:22:47.002 "product_name": "Malloc disk", 00:22:47.002 "block_size": 4128, 00:22:47.002 "num_blocks": 8192, 00:22:47.002 "uuid": "f84cf566-e38f-4c33-a0e7-c2603570a710", 00:22:47.002 "md_size": 32, 00:22:47.002 
"md_interleave": true, 00:22:47.002 "dif_type": 0, 00:22:47.002 "assigned_rate_limits": { 00:22:47.002 "rw_ios_per_sec": 0, 00:22:47.002 "rw_mbytes_per_sec": 0, 00:22:47.002 "r_mbytes_per_sec": 0, 00:22:47.002 "w_mbytes_per_sec": 0 00:22:47.002 }, 00:22:47.002 "claimed": true, 00:22:47.002 "claim_type": "exclusive_write", 00:22:47.002 "zoned": false, 00:22:47.002 "supported_io_types": { 00:22:47.002 "read": true, 00:22:47.002 "write": true, 00:22:47.002 "unmap": true, 00:22:47.002 "flush": true, 00:22:47.002 "reset": true, 00:22:47.002 "nvme_admin": false, 00:22:47.002 "nvme_io": false, 00:22:47.002 "nvme_io_md": false, 00:22:47.002 "write_zeroes": true, 00:22:47.002 "zcopy": true, 00:22:47.002 "get_zone_info": false, 00:22:47.002 "zone_management": false, 00:22:47.002 "zone_append": false, 00:22:47.002 "compare": false, 00:22:47.002 "compare_and_write": false, 00:22:47.002 "abort": true, 00:22:47.002 "seek_hole": false, 00:22:47.002 "seek_data": false, 00:22:47.002 "copy": true, 00:22:47.002 "nvme_iov_md": false 00:22:47.002 }, 00:22:47.002 "memory_domains": [ 00:22:47.002 { 00:22:47.002 "dma_device_id": "system", 00:22:47.002 "dma_device_type": 1 00:22:47.002 }, 00:22:47.002 { 00:22:47.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.002 "dma_device_type": 2 00:22:47.002 } 00:22:47.002 ], 00:22:47.002 "driver_specific": {} 00:22:47.002 } 00:22:47.002 ] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.002 11:34:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.002 "name": "Existed_Raid", 00:22:47.002 "uuid": "5ee8de08-f94e-401c-be5c-da2c919231e7", 00:22:47.002 "strip_size_kb": 0, 00:22:47.002 "state": "configuring", 00:22:47.002 "raid_level": "raid1", 
00:22:47.002 "superblock": true, 00:22:47.002 "num_base_bdevs": 2, 00:22:47.002 "num_base_bdevs_discovered": 1, 00:22:47.002 "num_base_bdevs_operational": 2, 00:22:47.002 "base_bdevs_list": [ 00:22:47.002 { 00:22:47.002 "name": "BaseBdev1", 00:22:47.002 "uuid": "f84cf566-e38f-4c33-a0e7-c2603570a710", 00:22:47.002 "is_configured": true, 00:22:47.002 "data_offset": 256, 00:22:47.002 "data_size": 7936 00:22:47.002 }, 00:22:47.002 { 00:22:47.002 "name": "BaseBdev2", 00:22:47.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.002 "is_configured": false, 00:22:47.002 "data_offset": 0, 00:22:47.002 "data_size": 0 00:22:47.002 } 00:22:47.002 ] 00:22:47.002 }' 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.002 11:34:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.570 [2024-11-20 11:34:55.189818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:47.570 [2024-11-20 11:34:55.190379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.570 [2024-11-20 11:34:55.201827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.570 [2024-11-20 11:34:55.204600] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:47.570 [2024-11-20 11:34:55.204787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.570 
11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.570 "name": "Existed_Raid", 00:22:47.570 "uuid": "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2", 00:22:47.570 "strip_size_kb": 0, 00:22:47.570 "state": "configuring", 00:22:47.570 "raid_level": "raid1", 00:22:47.570 "superblock": true, 00:22:47.570 "num_base_bdevs": 2, 00:22:47.570 "num_base_bdevs_discovered": 1, 00:22:47.570 "num_base_bdevs_operational": 2, 00:22:47.570 "base_bdevs_list": [ 00:22:47.570 { 00:22:47.570 "name": "BaseBdev1", 00:22:47.570 "uuid": "f84cf566-e38f-4c33-a0e7-c2603570a710", 00:22:47.570 "is_configured": true, 00:22:47.570 "data_offset": 256, 00:22:47.570 "data_size": 7936 00:22:47.570 }, 00:22:47.570 { 00:22:47.570 "name": "BaseBdev2", 00:22:47.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.570 "is_configured": false, 00:22:47.570 "data_offset": 0, 00:22:47.570 "data_size": 0 00:22:47.570 } 00:22:47.570 ] 00:22:47.570 }' 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:47.570 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.137 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:48.137 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.137 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.138 [2024-11-20 11:34:55.795775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.138 [2024-11-20 11:34:55.796100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:48.138 [2024-11-20 11:34:55.796121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:48.138 [2024-11-20 11:34:55.796280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:48.138 [2024-11-20 11:34:55.796394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:48.138 [2024-11-20 11:34:55.796414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:48.138 BaseBdev2 00:22:48.138 [2024-11-20 11:34:55.796505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.138 [ 00:22:48.138 { 00:22:48.138 "name": "BaseBdev2", 00:22:48.138 "aliases": [ 00:22:48.138 "a760b6ca-7163-42b7-a3d5-c7fed6c6ac52" 00:22:48.138 ], 00:22:48.138 "product_name": "Malloc disk", 00:22:48.138 "block_size": 4128, 00:22:48.138 "num_blocks": 8192, 00:22:48.138 "uuid": "a760b6ca-7163-42b7-a3d5-c7fed6c6ac52", 00:22:48.138 "md_size": 32, 00:22:48.138 "md_interleave": true, 00:22:48.138 "dif_type": 0, 00:22:48.138 "assigned_rate_limits": { 00:22:48.138 "rw_ios_per_sec": 0, 00:22:48.138 "rw_mbytes_per_sec": 0, 00:22:48.138 "r_mbytes_per_sec": 0, 00:22:48.138 "w_mbytes_per_sec": 0 00:22:48.138 }, 00:22:48.138 "claimed": true, 00:22:48.138 "claim_type": "exclusive_write", 
00:22:48.138 "zoned": false, 00:22:48.138 "supported_io_types": { 00:22:48.138 "read": true, 00:22:48.138 "write": true, 00:22:48.138 "unmap": true, 00:22:48.138 "flush": true, 00:22:48.138 "reset": true, 00:22:48.138 "nvme_admin": false, 00:22:48.138 "nvme_io": false, 00:22:48.138 "nvme_io_md": false, 00:22:48.138 "write_zeroes": true, 00:22:48.138 "zcopy": true, 00:22:48.138 "get_zone_info": false, 00:22:48.138 "zone_management": false, 00:22:48.138 "zone_append": false, 00:22:48.138 "compare": false, 00:22:48.138 "compare_and_write": false, 00:22:48.138 "abort": true, 00:22:48.138 "seek_hole": false, 00:22:48.138 "seek_data": false, 00:22:48.138 "copy": true, 00:22:48.138 "nvme_iov_md": false 00:22:48.138 }, 00:22:48.138 "memory_domains": [ 00:22:48.138 { 00:22:48.138 "dma_device_id": "system", 00:22:48.138 "dma_device_type": 1 00:22:48.138 }, 00:22:48.138 { 00:22:48.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.138 "dma_device_type": 2 00:22:48.138 } 00:22:48.138 ], 00:22:48.138 "driver_specific": {} 00:22:48.138 } 00:22:48.138 ] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.138 
11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.138 "name": "Existed_Raid", 00:22:48.138 "uuid": "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2", 00:22:48.138 "strip_size_kb": 0, 00:22:48.138 "state": "online", 00:22:48.138 "raid_level": "raid1", 00:22:48.138 "superblock": true, 00:22:48.138 "num_base_bdevs": 2, 00:22:48.138 "num_base_bdevs_discovered": 2, 00:22:48.138 
"num_base_bdevs_operational": 2, 00:22:48.138 "base_bdevs_list": [ 00:22:48.138 { 00:22:48.138 "name": "BaseBdev1", 00:22:48.138 "uuid": "f84cf566-e38f-4c33-a0e7-c2603570a710", 00:22:48.138 "is_configured": true, 00:22:48.138 "data_offset": 256, 00:22:48.138 "data_size": 7936 00:22:48.138 }, 00:22:48.138 { 00:22:48.138 "name": "BaseBdev2", 00:22:48.138 "uuid": "a760b6ca-7163-42b7-a3d5-c7fed6c6ac52", 00:22:48.138 "is_configured": true, 00:22:48.138 "data_offset": 256, 00:22:48.138 "data_size": 7936 00:22:48.138 } 00:22:48.138 ] 00:22:48.138 }' 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.138 11:34:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.736 11:34:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.736 [2024-11-20 11:34:56.308483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.736 "name": "Existed_Raid", 00:22:48.736 "aliases": [ 00:22:48.736 "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2" 00:22:48.736 ], 00:22:48.736 "product_name": "Raid Volume", 00:22:48.736 "block_size": 4128, 00:22:48.736 "num_blocks": 7936, 00:22:48.736 "uuid": "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2", 00:22:48.736 "md_size": 32, 00:22:48.736 "md_interleave": true, 00:22:48.736 "dif_type": 0, 00:22:48.736 "assigned_rate_limits": { 00:22:48.736 "rw_ios_per_sec": 0, 00:22:48.736 "rw_mbytes_per_sec": 0, 00:22:48.736 "r_mbytes_per_sec": 0, 00:22:48.736 "w_mbytes_per_sec": 0 00:22:48.736 }, 00:22:48.736 "claimed": false, 00:22:48.736 "zoned": false, 00:22:48.736 "supported_io_types": { 00:22:48.736 "read": true, 00:22:48.736 "write": true, 00:22:48.736 "unmap": false, 00:22:48.736 "flush": false, 00:22:48.736 "reset": true, 00:22:48.736 "nvme_admin": false, 00:22:48.736 "nvme_io": false, 00:22:48.736 "nvme_io_md": false, 00:22:48.736 "write_zeroes": true, 00:22:48.736 "zcopy": false, 00:22:48.736 "get_zone_info": false, 00:22:48.736 "zone_management": false, 00:22:48.736 "zone_append": false, 00:22:48.736 "compare": false, 00:22:48.736 "compare_and_write": false, 00:22:48.736 "abort": false, 00:22:48.736 "seek_hole": false, 00:22:48.736 "seek_data": false, 00:22:48.736 "copy": false, 00:22:48.736 "nvme_iov_md": false 00:22:48.736 }, 00:22:48.736 "memory_domains": [ 00:22:48.736 { 00:22:48.736 "dma_device_id": "system", 00:22:48.736 "dma_device_type": 1 00:22:48.736 }, 00:22:48.736 { 00:22:48.736 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:48.736 "dma_device_type": 2 00:22:48.736 }, 00:22:48.736 { 00:22:48.736 "dma_device_id": "system", 00:22:48.736 "dma_device_type": 1 00:22:48.736 }, 00:22:48.736 { 00:22:48.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.736 "dma_device_type": 2 00:22:48.736 } 00:22:48.736 ], 00:22:48.736 "driver_specific": { 00:22:48.736 "raid": { 00:22:48.736 "uuid": "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2", 00:22:48.736 "strip_size_kb": 0, 00:22:48.736 "state": "online", 00:22:48.736 "raid_level": "raid1", 00:22:48.736 "superblock": true, 00:22:48.736 "num_base_bdevs": 2, 00:22:48.736 "num_base_bdevs_discovered": 2, 00:22:48.736 "num_base_bdevs_operational": 2, 00:22:48.736 "base_bdevs_list": [ 00:22:48.736 { 00:22:48.736 "name": "BaseBdev1", 00:22:48.736 "uuid": "f84cf566-e38f-4c33-a0e7-c2603570a710", 00:22:48.736 "is_configured": true, 00:22:48.736 "data_offset": 256, 00:22:48.736 "data_size": 7936 00:22:48.736 }, 00:22:48.736 { 00:22:48.736 "name": "BaseBdev2", 00:22:48.736 "uuid": "a760b6ca-7163-42b7-a3d5-c7fed6c6ac52", 00:22:48.736 "is_configured": true, 00:22:48.736 "data_offset": 256, 00:22:48.736 "data_size": 7936 00:22:48.736 } 00:22:48.736 ] 00:22:48.736 } 00:22:48.736 } 00:22:48.736 }' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:48.736 BaseBdev2' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.736 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.737 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.737 
11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.737 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:48.737 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.737 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.737 [2024-11-20 11:34:56.552216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.996 11:34:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.996 "name": "Existed_Raid", 00:22:48.996 "uuid": "5ed2e3f6-935d-4c17-8f4f-022bdb7eb9e2", 00:22:48.996 "strip_size_kb": 0, 00:22:48.996 "state": "online", 00:22:48.996 "raid_level": "raid1", 00:22:48.996 "superblock": true, 00:22:48.996 "num_base_bdevs": 2, 00:22:48.996 "num_base_bdevs_discovered": 1, 00:22:48.996 "num_base_bdevs_operational": 1, 00:22:48.996 "base_bdevs_list": [ 00:22:48.996 { 00:22:48.996 "name": null, 00:22:48.996 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:48.996 "is_configured": false, 00:22:48.996 "data_offset": 0, 00:22:48.996 "data_size": 7936 00:22:48.996 }, 00:22:48.996 { 00:22:48.996 "name": "BaseBdev2", 00:22:48.996 "uuid": "a760b6ca-7163-42b7-a3d5-c7fed6c6ac52", 00:22:48.996 "is_configured": true, 00:22:48.996 "data_offset": 256, 00:22:48.996 "data_size": 7936 00:22:48.996 } 00:22:48.996 ] 00:22:48.996 }' 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.996 11:34:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:49.565 11:34:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 [2024-11-20 11:34:57.230546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:49.565 [2024-11-20 11:34:57.230773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:49.565 [2024-11-20 11:34:57.319507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.565 [2024-11-20 11:34:57.319588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.565 [2024-11-20 11:34:57.319607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88849 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88849 ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88849 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88849 00:22:49.565 killing process with pid 88849 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88849' 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88849 00:22:49.565 [2024-11-20 11:34:57.407937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.565 11:34:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88849 00:22:49.824 [2024-11-20 11:34:57.423594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.760 
11:34:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:50.760 00:22:50.760 real 0m5.705s 00:22:50.760 user 0m8.508s 00:22:50.761 sys 0m0.885s 00:22:50.761 11:34:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.761 ************************************ 00:22:50.761 END TEST raid_state_function_test_sb_md_interleaved 00:22:50.761 ************************************ 00:22:50.761 11:34:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.761 11:34:58 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:50.761 11:34:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:50.761 11:34:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.761 11:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:51.020 ************************************ 00:22:51.020 START TEST raid_superblock_test_md_interleaved 00:22:51.020 ************************************ 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89108 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89108 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89108 ']' 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.020 11:34:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.020 [2024-11-20 11:34:58.714665] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:22:51.020 [2024-11-20 11:34:58.715767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89108 ] 00:22:51.279 [2024-11-20 11:34:58.891361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.279 [2024-11-20 11:34:59.038781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.536 [2024-11-20 11:34:59.264987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.536 [2024-11-20 11:34:59.265097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.104 malloc1 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.104 [2024-11-20 11:34:59.874797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:52.104 [2024-11-20 11:34:59.875638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.104 [2024-11-20 11:34:59.875694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:52.104 [2024-11-20 11:34:59.875714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.104 
[2024-11-20 11:34:59.878364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.104 [2024-11-20 11:34:59.878411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:52.104 pt1 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:52.104 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.105 malloc2 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.105 [2024-11-20 11:34:59.930640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:52.105 [2024-11-20 11:34:59.930739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.105 [2024-11-20 11:34:59.930773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:52.105 [2024-11-20 11:34:59.930788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.105 [2024-11-20 11:34:59.933363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.105 [2024-11-20 11:34:59.933645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:52.105 pt2 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.105 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.105 [2024-11-20 11:34:59.942689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:52.105 [2024-11-20 11:34:59.945258] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:52.105 [2024-11-20 11:34:59.945706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:52.105 [2024-11-20 11:34:59.945732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:52.105 [2024-11-20 11:34:59.945841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:52.105 [2024-11-20 11:34:59.945947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:52.105 [2024-11-20 11:34:59.945967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:52.105 [2024-11-20 11:34:59.946069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.364 
11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.364 "name": "raid_bdev1", 00:22:52.364 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:52.364 "strip_size_kb": 0, 00:22:52.364 "state": "online", 00:22:52.364 "raid_level": "raid1", 00:22:52.364 "superblock": true, 00:22:52.364 "num_base_bdevs": 2, 00:22:52.364 "num_base_bdevs_discovered": 2, 00:22:52.364 "num_base_bdevs_operational": 2, 00:22:52.364 "base_bdevs_list": [ 00:22:52.364 { 00:22:52.364 "name": "pt1", 00:22:52.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:52.364 "is_configured": true, 00:22:52.364 "data_offset": 256, 00:22:52.364 "data_size": 7936 00:22:52.364 }, 00:22:52.364 { 00:22:52.364 "name": "pt2", 00:22:52.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:52.364 "is_configured": true, 00:22:52.364 "data_offset": 256, 00:22:52.364 "data_size": 7936 00:22:52.364 } 00:22:52.364 ] 00:22:52.364 }' 00:22:52.364 11:34:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.364 11:34:59 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.623 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:52.623 [2024-11-20 11:35:00.455256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.882 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.882 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:52.882 "name": "raid_bdev1", 00:22:52.882 "aliases": [ 00:22:52.882 "3a713b19-9d5e-4fa5-9f48-5b7a12980d69" 00:22:52.882 ], 00:22:52.882 "product_name": "Raid Volume", 00:22:52.882 "block_size": 4128, 00:22:52.882 "num_blocks": 7936, 00:22:52.882 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:52.882 "md_size": 32, 
00:22:52.882 "md_interleave": true, 00:22:52.882 "dif_type": 0, 00:22:52.882 "assigned_rate_limits": { 00:22:52.882 "rw_ios_per_sec": 0, 00:22:52.882 "rw_mbytes_per_sec": 0, 00:22:52.882 "r_mbytes_per_sec": 0, 00:22:52.882 "w_mbytes_per_sec": 0 00:22:52.882 }, 00:22:52.882 "claimed": false, 00:22:52.882 "zoned": false, 00:22:52.883 "supported_io_types": { 00:22:52.883 "read": true, 00:22:52.883 "write": true, 00:22:52.883 "unmap": false, 00:22:52.883 "flush": false, 00:22:52.883 "reset": true, 00:22:52.883 "nvme_admin": false, 00:22:52.883 "nvme_io": false, 00:22:52.883 "nvme_io_md": false, 00:22:52.883 "write_zeroes": true, 00:22:52.883 "zcopy": false, 00:22:52.883 "get_zone_info": false, 00:22:52.883 "zone_management": false, 00:22:52.883 "zone_append": false, 00:22:52.883 "compare": false, 00:22:52.883 "compare_and_write": false, 00:22:52.883 "abort": false, 00:22:52.883 "seek_hole": false, 00:22:52.883 "seek_data": false, 00:22:52.883 "copy": false, 00:22:52.883 "nvme_iov_md": false 00:22:52.883 }, 00:22:52.883 "memory_domains": [ 00:22:52.883 { 00:22:52.883 "dma_device_id": "system", 00:22:52.883 "dma_device_type": 1 00:22:52.883 }, 00:22:52.883 { 00:22:52.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.883 "dma_device_type": 2 00:22:52.883 }, 00:22:52.883 { 00:22:52.883 "dma_device_id": "system", 00:22:52.883 "dma_device_type": 1 00:22:52.883 }, 00:22:52.883 { 00:22:52.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.883 "dma_device_type": 2 00:22:52.883 } 00:22:52.883 ], 00:22:52.883 "driver_specific": { 00:22:52.883 "raid": { 00:22:52.883 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:52.883 "strip_size_kb": 0, 00:22:52.883 "state": "online", 00:22:52.883 "raid_level": "raid1", 00:22:52.883 "superblock": true, 00:22:52.883 "num_base_bdevs": 2, 00:22:52.883 "num_base_bdevs_discovered": 2, 00:22:52.883 "num_base_bdevs_operational": 2, 00:22:52.883 "base_bdevs_list": [ 00:22:52.883 { 00:22:52.883 "name": "pt1", 00:22:52.883 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:52.883 "is_configured": true, 00:22:52.883 "data_offset": 256, 00:22:52.883 "data_size": 7936 00:22:52.883 }, 00:22:52.883 { 00:22:52.883 "name": "pt2", 00:22:52.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:52.883 "is_configured": true, 00:22:52.883 "data_offset": 256, 00:22:52.883 "data_size": 7936 00:22:52.883 } 00:22:52.883 ] 00:22:52.883 } 00:22:52.883 } 00:22:52.883 }' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:52.883 pt2' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:52.883 11:35:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:52.883 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.883 [2024-11-20 11:35:00.711302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a713b19-9d5e-4fa5-9f48-5b7a12980d69 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 3a713b19-9d5e-4fa5-9f48-5b7a12980d69 ']' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 [2024-11-20 11:35:00.754911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.143 [2024-11-20 11:35:00.755178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:53.143 [2024-11-20 11:35:00.755352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:53.143 [2024-11-20 11:35:00.755442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:53.143 [2024-11-20 11:35:00.755463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 11:35:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 [2024-11-20 11:35:00.882975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:53.143 [2024-11-20 11:35:00.886104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:53.143 [2024-11-20 11:35:00.886234] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:22:53.143 [2024-11-20 11:35:00.886328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:53.143 [2024-11-20 11:35:00.886356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:53.143 [2024-11-20 11:35:00.886373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:53.143 request: 00:22:53.143 { 00:22:53.143 "name": "raid_bdev1", 00:22:53.143 "raid_level": "raid1", 00:22:53.143 "base_bdevs": [ 00:22:53.143 "malloc1", 00:22:53.143 "malloc2" 00:22:53.143 ], 00:22:53.143 "superblock": false, 00:22:53.143 "method": "bdev_raid_create", 00:22:53.143 "req_id": 1 00:22:53.143 } 00:22:53.143 Got JSON-RPC error response 00:22:53.143 response: 00:22:53.143 { 00:22:53.143 "code": -17, 00:22:53.143 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:53.143 } 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.143 11:35:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:53.143 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.144 [2024-11-20 11:35:00.943127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:53.144 [2024-11-20 11:35:00.943454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.144 [2024-11-20 11:35:00.943660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:53.144 [2024-11-20 11:35:00.943796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.144 [2024-11-20 11:35:00.946696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.144 [2024-11-20 11:35:00.946871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:53.144 [2024-11-20 11:35:00.947087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:53.144 [2024-11-20 11:35:00.947280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:53.144 pt1 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.144 11:35:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.144 11:35:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.403 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.403 
"name": "raid_bdev1", 00:22:53.403 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:53.403 "strip_size_kb": 0, 00:22:53.403 "state": "configuring", 00:22:53.403 "raid_level": "raid1", 00:22:53.403 "superblock": true, 00:22:53.403 "num_base_bdevs": 2, 00:22:53.403 "num_base_bdevs_discovered": 1, 00:22:53.403 "num_base_bdevs_operational": 2, 00:22:53.403 "base_bdevs_list": [ 00:22:53.403 { 00:22:53.403 "name": "pt1", 00:22:53.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.403 "is_configured": true, 00:22:53.403 "data_offset": 256, 00:22:53.403 "data_size": 7936 00:22:53.403 }, 00:22:53.403 { 00:22:53.403 "name": null, 00:22:53.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.403 "is_configured": false, 00:22:53.403 "data_offset": 256, 00:22:53.403 "data_size": 7936 00:22:53.403 } 00:22:53.403 ] 00:22:53.403 }' 00:22:53.403 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.403 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.662 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.662 [2024-11-20 11:35:01.467391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:53.662 [2024-11-20 11:35:01.467564] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.662 [2024-11-20 11:35:01.467601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:53.663 [2024-11-20 11:35:01.467643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.663 [2024-11-20 11:35:01.467936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.663 [2024-11-20 11:35:01.467964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:53.663 [2024-11-20 11:35:01.468048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:53.663 [2024-11-20 11:35:01.468092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:53.663 [2024-11-20 11:35:01.468231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:53.663 [2024-11-20 11:35:01.468252] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:53.663 [2024-11-20 11:35:01.468345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:53.663 [2024-11-20 11:35:01.468465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:53.663 [2024-11-20 11:35:01.468489] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:53.663 [2024-11-20 11:35:01.468585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.663 pt2 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:53.663 11:35:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.663 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.922 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.922 "name": 
"raid_bdev1", 00:22:53.922 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:53.922 "strip_size_kb": 0, 00:22:53.922 "state": "online", 00:22:53.922 "raid_level": "raid1", 00:22:53.922 "superblock": true, 00:22:53.922 "num_base_bdevs": 2, 00:22:53.922 "num_base_bdevs_discovered": 2, 00:22:53.922 "num_base_bdevs_operational": 2, 00:22:53.922 "base_bdevs_list": [ 00:22:53.922 { 00:22:53.922 "name": "pt1", 00:22:53.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.922 "is_configured": true, 00:22:53.922 "data_offset": 256, 00:22:53.922 "data_size": 7936 00:22:53.922 }, 00:22:53.922 { 00:22:53.922 "name": "pt2", 00:22:53.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.922 "is_configured": true, 00:22:53.922 "data_offset": 256, 00:22:53.922 "data_size": 7936 00:22:53.922 } 00:22:53.922 ] 00:22:53.922 }' 00:22:53.922 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.922 11:35:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.488 11:35:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.488 [2024-11-20 11:35:02.043931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.488 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.488 "name": "raid_bdev1", 00:22:54.488 "aliases": [ 00:22:54.488 "3a713b19-9d5e-4fa5-9f48-5b7a12980d69" 00:22:54.488 ], 00:22:54.488 "product_name": "Raid Volume", 00:22:54.488 "block_size": 4128, 00:22:54.489 "num_blocks": 7936, 00:22:54.489 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:54.489 "md_size": 32, 00:22:54.489 "md_interleave": true, 00:22:54.489 "dif_type": 0, 00:22:54.489 "assigned_rate_limits": { 00:22:54.489 "rw_ios_per_sec": 0, 00:22:54.489 "rw_mbytes_per_sec": 0, 00:22:54.489 "r_mbytes_per_sec": 0, 00:22:54.489 "w_mbytes_per_sec": 0 00:22:54.489 }, 00:22:54.489 "claimed": false, 00:22:54.489 "zoned": false, 00:22:54.489 "supported_io_types": { 00:22:54.489 "read": true, 00:22:54.489 "write": true, 00:22:54.489 "unmap": false, 00:22:54.489 "flush": false, 00:22:54.489 "reset": true, 00:22:54.489 "nvme_admin": false, 00:22:54.489 "nvme_io": false, 00:22:54.489 "nvme_io_md": false, 00:22:54.489 "write_zeroes": true, 00:22:54.489 "zcopy": false, 00:22:54.489 "get_zone_info": false, 00:22:54.489 "zone_management": false, 00:22:54.489 "zone_append": false, 00:22:54.489 "compare": false, 00:22:54.489 "compare_and_write": false, 00:22:54.489 "abort": false, 00:22:54.489 "seek_hole": false, 00:22:54.489 "seek_data": false, 00:22:54.489 "copy": false, 00:22:54.489 "nvme_iov_md": 
false 00:22:54.489 }, 00:22:54.489 "memory_domains": [ 00:22:54.489 { 00:22:54.489 "dma_device_id": "system", 00:22:54.489 "dma_device_type": 1 00:22:54.489 }, 00:22:54.489 { 00:22:54.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.489 "dma_device_type": 2 00:22:54.489 }, 00:22:54.489 { 00:22:54.489 "dma_device_id": "system", 00:22:54.489 "dma_device_type": 1 00:22:54.489 }, 00:22:54.489 { 00:22:54.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.489 "dma_device_type": 2 00:22:54.489 } 00:22:54.489 ], 00:22:54.489 "driver_specific": { 00:22:54.489 "raid": { 00:22:54.489 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69", 00:22:54.489 "strip_size_kb": 0, 00:22:54.489 "state": "online", 00:22:54.489 "raid_level": "raid1", 00:22:54.489 "superblock": true, 00:22:54.489 "num_base_bdevs": 2, 00:22:54.489 "num_base_bdevs_discovered": 2, 00:22:54.489 "num_base_bdevs_operational": 2, 00:22:54.489 "base_bdevs_list": [ 00:22:54.489 { 00:22:54.489 "name": "pt1", 00:22:54.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:54.489 "is_configured": true, 00:22:54.489 "data_offset": 256, 00:22:54.489 "data_size": 7936 00:22:54.489 }, 00:22:54.489 { 00:22:54.489 "name": "pt2", 00:22:54.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:54.489 "is_configured": true, 00:22:54.489 "data_offset": 256, 00:22:54.489 "data_size": 7936 00:22:54.489 } 00:22:54.489 ] 00:22:54.489 } 00:22:54.489 } 00:22:54.489 }' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:54.489 pt2' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.489 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.489 [2024-11-20 11:35:02.323897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 3a713b19-9d5e-4fa5-9f48-5b7a12980d69 '!=' 3a713b19-9d5e-4fa5-9f48-5b7a12980d69 ']' 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.748 [2024-11-20 11:35:02.371605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{
00:22:54.748 "name": "raid_bdev1",
00:22:54.748 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69",
00:22:54.748 "strip_size_kb": 0,
00:22:54.748 "state": "online",
00:22:54.748 "raid_level": "raid1",
00:22:54.748 "superblock": true,
00:22:54.748 "num_base_bdevs": 2,
00:22:54.748 "num_base_bdevs_discovered": 1,
00:22:54.748 "num_base_bdevs_operational": 1,
00:22:54.748 "base_bdevs_list": [
00:22:54.748 {
00:22:54.748 "name": null,
00:22:54.748 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:54.748 "is_configured": false,
00:22:54.748 "data_offset": 0,
00:22:54.748 "data_size": 7936
00:22:54.748 },
00:22:54.748 {
00:22:54.748 "name": "pt2",
00:22:54.748 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:54.748 "is_configured": true,
00:22:54.748 "data_offset": 256,
00:22:54.748 "data_size": 7936
00:22:54.748 }
00:22:54.748 ]
00:22:54.748 }'
00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:54.748 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:55.315 [2024-11-20 11:35:02.907817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:55.315 [2024-11-20 11:35:02.907884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:55.315 [2024-11-20 11:35:02.908010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:55.315 [2024-11-20 11:35:02.908087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:55.315 [2024-11-20 11:35:02.908109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.315 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 [2024-11-20 11:35:02.983767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:55.315 [2024-11-20 11:35:02.984166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.315 [2024-11-20 11:35:02.984203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:55.315 [2024-11-20 11:35:02.984223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.315 [2024-11-20 11:35:02.987048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.315 [2024-11-20 11:35:02.987094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:55.315 [2024-11-20 11:35:02.987172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:55.315 [2024-11-20 11:35:02.987239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.315 [2024-11-20 11:35:02.987336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:55.315 [2024-11-20 11:35:02.987357] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:22:55.315 [2024-11-20 11:35:02.987479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:55.315 [2024-11-20 11:35:02.987568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:55.315 [2024-11-20 11:35:02.987582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:55.316 pt2 00:22:55.316 [2024-11-20 11:35:02.987729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.316 11:35:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:55.316 11:35:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:55.316 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:55.316 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:55.316 "name": "raid_bdev1",
00:22:55.316 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69",
00:22:55.316 "strip_size_kb": 0,
00:22:55.316 "state": "online",
00:22:55.316 "raid_level": "raid1",
00:22:55.316 "superblock": true,
00:22:55.316 "num_base_bdevs": 2,
00:22:55.316 "num_base_bdevs_discovered": 1,
00:22:55.316 "num_base_bdevs_operational": 1,
00:22:55.316 "base_bdevs_list": [
00:22:55.316 {
00:22:55.316 "name": null,
00:22:55.316 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:55.316 "is_configured": false,
00:22:55.316 "data_offset": 256,
00:22:55.316 "data_size": 7936
00:22:55.316 },
00:22:55.316 {
00:22:55.316 "name": "pt2",
00:22:55.316 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:55.316 "is_configured": true,
00:22:55.316 "data_offset": 256,
00:22:55.316 "data_size": 7936
00:22:55.316 }
00:22:55.316 ]
00:22:55.316 }'
00:22:55.316 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:55.316 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:55.934 11:35:03
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 [2024-11-20 11:35:03.543919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.934 [2024-11-20 11:35:03.544183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.934 [2024-11-20 11:35:03.544307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.934 [2024-11-20 11:35:03.544390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.934 [2024-11-20 11:35:03.544407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 [2024-11-20 11:35:03.607984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:55.934 [2024-11-20 11:35:03.608092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.934 [2024-11-20 11:35:03.608128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:55.934 [2024-11-20 11:35:03.608143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.934 [2024-11-20 11:35:03.610951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.934 [2024-11-20 11:35:03.611197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:55.934 [2024-11-20 11:35:03.611308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:55.934 [2024-11-20 11:35:03.611376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:55.934 [2024-11-20 11:35:03.611518] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:55.934 [2024-11-20 11:35:03.611536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.934 [2024-11-20 11:35:03.611564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:55.934 [2024-11-20 11:35:03.611652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.934 [2024-11-20 11:35:03.611764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:22:55.934 [2024-11-20 11:35:03.611780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:55.934 [2024-11-20 11:35:03.611867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:55.934 [2024-11-20 11:35:03.611958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:55.934 [2024-11-20 11:35:03.611977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:55.934 [2024-11-20 11:35:03.612136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.934 pt1 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.934 11:35:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:55.934 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:55.934 "name": "raid_bdev1",
00:22:55.934 "uuid": "3a713b19-9d5e-4fa5-9f48-5b7a12980d69",
00:22:55.934 "strip_size_kb": 0,
00:22:55.934 "state": "online",
00:22:55.934 "raid_level": "raid1",
00:22:55.934 "superblock": true,
00:22:55.934 "num_base_bdevs": 2,
00:22:55.934 "num_base_bdevs_discovered": 1,
00:22:55.934 "num_base_bdevs_operational": 1,
00:22:55.934 "base_bdevs_list": [
00:22:55.934 {
00:22:55.935 "name": null,
00:22:55.935 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:55.935 "is_configured": false,
00:22:55.935 "data_offset": 256,
00:22:55.935 "data_size": 7936
00:22:55.935 },
00:22:55.935 {
00:22:55.935 "name": "pt2",
00:22:55.935 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:55.935 "is_configured": true,
00:22:55.935 "data_offset": 256,
00:22:55.935 "data_size": 7936
00:22:55.935 }
00:22:55.935 ]
00:22:55.935 }'
00:22:55.935 11:35:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:55.935 11:35:03 bdev_raid.raid_superblock_test_md_interleaved --
common/autotest_common.sh@10 -- # set +x 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:56.502 [2024-11-20 11:35:04.176655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 3a713b19-9d5e-4fa5-9f48-5b7a12980d69 '!=' 3a713b19-9d5e-4fa5-9f48-5b7a12980d69 ']' 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89108 00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89108 ']' 00:22:56.502 11:35:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89108
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89108
killing process with pid 89108
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89108'
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89108
00:22:56.502 [2024-11-20 11:35:04.253181] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:56.502 11:35:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89108
00:22:56.502 [2024-11-20 11:35:04.253330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:56.502 [2024-11-20 11:35:04.253439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:56.502 [2024-11-20 11:35:04.253462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:22:56.760 [2024-11-20 11:35:04.451863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:58.134 11:35:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0
00:22:58.134
00:22:58.134 real 0m6.952s
00:22:58.134 user 0m10.947s
00:22:58.134 sys 0m1.060s
00:22:58.134 11:35:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.134 11:35:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.134 ************************************ 00:22:58.134 END TEST raid_superblock_test_md_interleaved 00:22:58.134 ************************************ 00:22:58.134 11:35:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:58.134 11:35:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:58.134 11:35:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.134 11:35:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:58.134 ************************************ 00:22:58.134 START TEST raid_rebuild_test_sb_md_interleaved 00:22:58.134 ************************************ 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:58.134 11:35:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:58.134 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:58.135 
11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89441 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89441 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89441 ']' 00:22:58.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.135 11:35:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.135 [2024-11-20 11:35:05.737799] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:22:58.135 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:58.135 Zero copy mechanism will not be used. 
00:22:58.135 [2024-11-20 11:35:05.738216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89441 ]
00:22:58.135 [2024-11-20 11:35:05.924885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.393 [2024-11-20 11:35:06.071320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:58.652 [2024-11-20 11:35:06.294172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:58.652 [2024-11-20 11:35:06.294245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:22:59.221 BaseBdev1_malloc
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:22:59.221 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.221 11:35:06
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.221 [2024-11-20 11:35:06.815367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:59.221 [2024-11-20 11:35:06.815765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.221 [2024-11-20 11:35:06.815806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:59.222 [2024-11-20 11:35:06.815828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.222 [2024-11-20 11:35:06.818591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.222 [2024-11-20 11:35:06.818650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:59.222 BaseBdev1 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 BaseBdev2_malloc 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:59.222 [2024-11-20 11:35:06.875398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:59.222 [2024-11-20 11:35:06.875708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.222 [2024-11-20 11:35:06.875751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:59.222 [2024-11-20 11:35:06.875776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.222 [2024-11-20 11:35:06.878521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.222 [2024-11-20 11:35:06.878569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:59.222 BaseBdev2 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 spare_malloc 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 spare_delay 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 [2024-11-20 11:35:06.952331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.222 [2024-11-20 11:35:06.952434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.222 [2024-11-20 11:35:06.952465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:59.222 [2024-11-20 11:35:06.952485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.222 [2024-11-20 11:35:06.955137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.222 [2024-11-20 11:35:06.955186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.222 spare 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 [2024-11-20 11:35:06.960388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.222 [2024-11-20 11:35:06.963135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.222 [2024-11-20 
11:35:06.963412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:59.222 [2024-11-20 11:35:06.963437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:59.222 [2024-11-20 11:35:06.963538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:59.222 [2024-11-20 11:35:06.963812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:59.222 [2024-11-20 11:35:06.963868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:59.222 [2024-11-20 11:35:06.964015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.222 11:35:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.222 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.222 "name": "raid_bdev1", 00:22:59.222 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:22:59.222 "strip_size_kb": 0, 00:22:59.222 "state": "online", 00:22:59.222 "raid_level": "raid1", 00:22:59.222 "superblock": true, 00:22:59.222 "num_base_bdevs": 2, 00:22:59.222 "num_base_bdevs_discovered": 2, 00:22:59.222 "num_base_bdevs_operational": 2, 00:22:59.222 "base_bdevs_list": [ 00:22:59.222 { 00:22:59.222 "name": "BaseBdev1", 00:22:59.222 "uuid": "b7b2c55f-2f8b-539d-a2ba-2038477363df", 00:22:59.222 "is_configured": true, 00:22:59.222 "data_offset": 256, 00:22:59.222 "data_size": 7936 00:22:59.222 }, 00:22:59.222 { 00:22:59.222 "name": "BaseBdev2", 00:22:59.222 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:22:59.222 "is_configured": true, 00:22:59.222 "data_offset": 256, 00:22:59.222 "data_size": 7936 00:22:59.222 } 00:22:59.222 ] 00:22:59.222 }' 00:22:59.222 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.222 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.877 11:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:59.877 [2024-11-20 11:35:07.500997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.877 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:59.878 11:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.878 [2024-11-20 11:35:07.600629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.878 11:35:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.878 "name": "raid_bdev1", 00:22:59.878 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:22:59.878 "strip_size_kb": 0, 00:22:59.878 "state": "online", 00:22:59.878 "raid_level": "raid1", 00:22:59.878 "superblock": true, 00:22:59.878 "num_base_bdevs": 2, 00:22:59.878 "num_base_bdevs_discovered": 1, 00:22:59.878 "num_base_bdevs_operational": 1, 00:22:59.878 "base_bdevs_list": [ 00:22:59.878 { 00:22:59.878 "name": null, 00:22:59.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.878 "is_configured": false, 00:22:59.878 "data_offset": 0, 00:22:59.878 "data_size": 7936 00:22:59.878 }, 00:22:59.878 { 00:22:59.878 "name": "BaseBdev2", 00:22:59.878 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:22:59.878 "is_configured": true, 00:22:59.878 "data_offset": 256, 00:22:59.878 "data_size": 7936 00:22:59.878 } 00:22:59.878 ] 00:22:59.878 }' 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.878 11:35:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.447 11:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:00.447 11:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.447 11:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.447 [2024-11-20 11:35:08.144800] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.447 [2024-11-20 11:35:08.163010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:00.447 11:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.447 11:35:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:00.447 [2024-11-20 11:35:08.165862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.384 "name": "raid_bdev1", 00:23:01.384 
"uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:01.384 "strip_size_kb": 0, 00:23:01.384 "state": "online", 00:23:01.384 "raid_level": "raid1", 00:23:01.384 "superblock": true, 00:23:01.384 "num_base_bdevs": 2, 00:23:01.384 "num_base_bdevs_discovered": 2, 00:23:01.384 "num_base_bdevs_operational": 2, 00:23:01.384 "process": { 00:23:01.384 "type": "rebuild", 00:23:01.384 "target": "spare", 00:23:01.384 "progress": { 00:23:01.384 "blocks": 2560, 00:23:01.384 "percent": 32 00:23:01.384 } 00:23:01.384 }, 00:23:01.384 "base_bdevs_list": [ 00:23:01.384 { 00:23:01.384 "name": "spare", 00:23:01.384 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:01.384 "is_configured": true, 00:23:01.384 "data_offset": 256, 00:23:01.384 "data_size": 7936 00:23:01.384 }, 00:23:01.384 { 00:23:01.384 "name": "BaseBdev2", 00:23:01.384 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:01.384 "is_configured": true, 00:23:01.384 "data_offset": 256, 00:23:01.384 "data_size": 7936 00:23:01.384 } 00:23:01.384 ] 00:23:01.384 }' 00:23:01.384 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.641 [2024-11-20 11:35:09.324145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:01.641 [2024-11-20 11:35:09.377844] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:01.641 [2024-11-20 11:35:09.377974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.641 [2024-11-20 11:35:09.378001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.641 [2024-11-20 11:35:09.378022] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.641 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.641 "name": "raid_bdev1", 00:23:01.641 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:01.641 "strip_size_kb": 0, 00:23:01.641 "state": "online", 00:23:01.641 "raid_level": "raid1", 00:23:01.641 "superblock": true, 00:23:01.641 "num_base_bdevs": 2, 00:23:01.641 "num_base_bdevs_discovered": 1, 00:23:01.641 "num_base_bdevs_operational": 1, 00:23:01.641 "base_bdevs_list": [ 00:23:01.641 { 00:23:01.641 "name": null, 00:23:01.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.642 "is_configured": false, 00:23:01.642 "data_offset": 0, 00:23:01.642 "data_size": 7936 00:23:01.642 }, 00:23:01.642 { 00:23:01.642 "name": "BaseBdev2", 00:23:01.642 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:01.642 "is_configured": true, 00:23:01.642 "data_offset": 256, 00:23:01.642 "data_size": 7936 00:23:01.642 } 00:23:01.642 ] 00:23:01.642 }' 00:23:01.642 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.642 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:02.208 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:02.209 "name": "raid_bdev1", 00:23:02.209 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:02.209 "strip_size_kb": 0, 00:23:02.209 "state": "online", 00:23:02.209 "raid_level": "raid1", 00:23:02.209 "superblock": true, 00:23:02.209 "num_base_bdevs": 2, 00:23:02.209 "num_base_bdevs_discovered": 1, 00:23:02.209 "num_base_bdevs_operational": 1, 00:23:02.209 "base_bdevs_list": [ 00:23:02.209 { 00:23:02.209 "name": null, 00:23:02.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.209 "is_configured": false, 00:23:02.209 "data_offset": 0, 00:23:02.209 "data_size": 7936 00:23:02.209 }, 00:23:02.209 { 00:23:02.209 "name": "BaseBdev2", 00:23:02.209 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:02.209 "is_configured": true, 00:23:02.209 "data_offset": 256, 00:23:02.209 "data_size": 7936 00:23:02.209 } 00:23:02.209 ] 00:23:02.209 }' 
00:23:02.209 11:35:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:02.209 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:02.209 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.468 [2024-11-20 11:35:10.076273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.468 [2024-11-20 11:35:10.093442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.468 11:35:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:02.468 [2024-11-20 11:35:10.096444] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.438 "name": "raid_bdev1", 00:23:03.438 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:03.438 "strip_size_kb": 0, 00:23:03.438 "state": "online", 00:23:03.438 "raid_level": "raid1", 00:23:03.438 "superblock": true, 00:23:03.438 "num_base_bdevs": 2, 00:23:03.438 "num_base_bdevs_discovered": 2, 00:23:03.438 "num_base_bdevs_operational": 2, 00:23:03.438 "process": { 00:23:03.438 "type": "rebuild", 00:23:03.438 "target": "spare", 00:23:03.438 "progress": { 00:23:03.438 "blocks": 2560, 00:23:03.438 "percent": 32 00:23:03.438 } 00:23:03.438 }, 00:23:03.438 "base_bdevs_list": [ 00:23:03.438 { 00:23:03.438 "name": "spare", 00:23:03.438 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:03.438 "is_configured": true, 00:23:03.438 "data_offset": 256, 00:23:03.438 "data_size": 7936 00:23:03.438 }, 00:23:03.438 { 00:23:03.438 "name": "BaseBdev2", 00:23:03.438 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:03.438 "is_configured": true, 00:23:03.438 "data_offset": 256, 00:23:03.438 "data_size": 7936 00:23:03.438 } 00:23:03.438 ] 00:23:03.438 }' 00:23:03.438 11:35:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:03.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=801 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.438 11:35:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.438 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.697 "name": "raid_bdev1", 00:23:03.697 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:03.697 "strip_size_kb": 0, 00:23:03.697 "state": "online", 00:23:03.697 "raid_level": "raid1", 00:23:03.697 "superblock": true, 00:23:03.697 "num_base_bdevs": 2, 00:23:03.697 "num_base_bdevs_discovered": 2, 00:23:03.697 "num_base_bdevs_operational": 2, 00:23:03.697 "process": { 00:23:03.697 "type": "rebuild", 00:23:03.697 "target": "spare", 00:23:03.697 "progress": { 00:23:03.697 "blocks": 2816, 00:23:03.697 "percent": 35 00:23:03.697 } 00:23:03.697 }, 00:23:03.697 "base_bdevs_list": [ 00:23:03.697 { 00:23:03.697 "name": "spare", 00:23:03.697 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:03.697 "is_configured": true, 00:23:03.697 "data_offset": 256, 00:23:03.697 "data_size": 7936 00:23:03.697 }, 00:23:03.697 { 00:23:03.697 "name": "BaseBdev2", 00:23:03.697 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:03.697 "is_configured": true, 00:23:03.697 "data_offset": 256, 00:23:03.697 "data_size": 7936 00:23:03.697 } 00:23:03.697 ] 00:23:03.697 }' 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.697 11:35:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.634 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.892 11:35:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:04.892 "name": "raid_bdev1", 00:23:04.892 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:04.892 "strip_size_kb": 0, 00:23:04.892 "state": "online", 00:23:04.892 "raid_level": "raid1", 00:23:04.892 "superblock": true, 00:23:04.892 "num_base_bdevs": 2, 00:23:04.892 "num_base_bdevs_discovered": 2, 00:23:04.892 "num_base_bdevs_operational": 2, 00:23:04.892 "process": { 00:23:04.892 "type": "rebuild", 00:23:04.892 "target": "spare", 00:23:04.892 "progress": { 00:23:04.892 "blocks": 5888, 00:23:04.892 "percent": 74 00:23:04.892 } 00:23:04.892 }, 00:23:04.892 "base_bdevs_list": [ 00:23:04.892 { 00:23:04.892 "name": "spare", 00:23:04.892 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:04.892 "is_configured": true, 00:23:04.892 "data_offset": 256, 00:23:04.892 "data_size": 7936 00:23:04.892 }, 00:23:04.892 { 00:23:04.892 "name": "BaseBdev2", 00:23:04.892 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:04.892 "is_configured": true, 00:23:04.892 "data_offset": 256, 00:23:04.892 "data_size": 7936 00:23:04.892 } 00:23:04.892 ] 00:23:04.892 }' 00:23:04.892 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.892 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.892 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.892 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.892 11:35:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:05.461 [2024-11-20 11:35:13.226291] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:05.461 [2024-11-20 11:35:13.226426] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:05.461 [2024-11-20 11:35:13.226645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.027 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.028 "name": "raid_bdev1", 00:23:06.028 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:06.028 "strip_size_kb": 0, 00:23:06.028 "state": "online", 00:23:06.028 "raid_level": "raid1", 00:23:06.028 "superblock": true, 00:23:06.028 "num_base_bdevs": 2, 00:23:06.028 
"num_base_bdevs_discovered": 2, 00:23:06.028 "num_base_bdevs_operational": 2, 00:23:06.028 "base_bdevs_list": [ 00:23:06.028 { 00:23:06.028 "name": "spare", 00:23:06.028 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:06.028 "is_configured": true, 00:23:06.028 "data_offset": 256, 00:23:06.028 "data_size": 7936 00:23:06.028 }, 00:23:06.028 { 00:23:06.028 "name": "BaseBdev2", 00:23:06.028 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:06.028 "is_configured": true, 00:23:06.028 "data_offset": 256, 00:23:06.028 "data_size": 7936 00:23:06.028 } 00:23:06.028 ] 00:23:06.028 }' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.028 11:35:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.028 "name": "raid_bdev1", 00:23:06.028 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:06.028 "strip_size_kb": 0, 00:23:06.028 "state": "online", 00:23:06.028 "raid_level": "raid1", 00:23:06.028 "superblock": true, 00:23:06.028 "num_base_bdevs": 2, 00:23:06.028 "num_base_bdevs_discovered": 2, 00:23:06.028 "num_base_bdevs_operational": 2, 00:23:06.028 "base_bdevs_list": [ 00:23:06.028 { 00:23:06.028 "name": "spare", 00:23:06.028 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:06.028 "is_configured": true, 00:23:06.028 "data_offset": 256, 00:23:06.028 "data_size": 7936 00:23:06.028 }, 00:23:06.028 { 00:23:06.028 "name": "BaseBdev2", 00:23:06.028 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:06.028 "is_configured": true, 00:23:06.028 "data_offset": 256, 00:23:06.028 "data_size": 7936 00:23:06.028 } 00:23:06.028 ] 00:23:06.028 }' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:06.028 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.286 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:06.287 11:35:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.287 "name": 
"raid_bdev1", 00:23:06.287 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:06.287 "strip_size_kb": 0, 00:23:06.287 "state": "online", 00:23:06.287 "raid_level": "raid1", 00:23:06.287 "superblock": true, 00:23:06.287 "num_base_bdevs": 2, 00:23:06.287 "num_base_bdevs_discovered": 2, 00:23:06.287 "num_base_bdevs_operational": 2, 00:23:06.287 "base_bdevs_list": [ 00:23:06.287 { 00:23:06.287 "name": "spare", 00:23:06.287 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:06.287 "is_configured": true, 00:23:06.287 "data_offset": 256, 00:23:06.287 "data_size": 7936 00:23:06.287 }, 00:23:06.287 { 00:23:06.287 "name": "BaseBdev2", 00:23:06.287 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:06.287 "is_configured": true, 00:23:06.287 "data_offset": 256, 00:23:06.287 "data_size": 7936 00:23:06.287 } 00:23:06.287 ] 00:23:06.287 }' 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.287 11:35:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.576 [2024-11-20 11:35:14.396860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.576 [2024-11-20 11:35:14.397176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:06.576 [2024-11-20 11:35:14.397336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:06.576 [2024-11-20 11:35:14.397471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:06.576 [2024-11-20 
11:35:14.397493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.576 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 11:35:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 [2024-11-20 11:35:14.472848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:06.834 [2024-11-20 11:35:14.472979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.834 [2024-11-20 11:35:14.473018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:06.834 [2024-11-20 11:35:14.473035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.834 [2024-11-20 11:35:14.476002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.834 [2024-11-20 11:35:14.476308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:06.834 [2024-11-20 11:35:14.476425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:06.834 [2024-11-20 11:35:14.476514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:06.834 [2024-11-20 11:35:14.476705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:06.834 spare 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.834 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.834 [2024-11-20 11:35:14.576863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:06.834 [2024-11-20 11:35:14.576959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:06.834 [2024-11-20 11:35:14.577157] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:06.834 [2024-11-20 11:35:14.577337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:06.834 [2024-11-20 11:35:14.577352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:06.835 [2024-11-20 11:35:14.577541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.835 11:35:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.835 "name": "raid_bdev1", 00:23:06.835 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:06.835 "strip_size_kb": 0, 00:23:06.835 "state": "online", 00:23:06.835 "raid_level": "raid1", 00:23:06.835 "superblock": true, 00:23:06.835 "num_base_bdevs": 2, 00:23:06.835 "num_base_bdevs_discovered": 2, 00:23:06.835 "num_base_bdevs_operational": 2, 00:23:06.835 "base_bdevs_list": [ 00:23:06.835 { 00:23:06.835 "name": "spare", 00:23:06.835 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:06.835 "is_configured": true, 00:23:06.835 "data_offset": 256, 00:23:06.835 "data_size": 7936 00:23:06.835 }, 00:23:06.835 { 00:23:06.835 "name": "BaseBdev2", 00:23:06.835 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:06.835 "is_configured": true, 00:23:06.835 "data_offset": 256, 00:23:06.835 "data_size": 7936 00:23:06.835 } 00:23:06.835 ] 00:23:06.835 }' 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.835 11:35:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.402 11:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.402 "name": "raid_bdev1", 00:23:07.402 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:07.402 "strip_size_kb": 0, 00:23:07.402 "state": "online", 00:23:07.402 "raid_level": "raid1", 00:23:07.402 "superblock": true, 00:23:07.402 "num_base_bdevs": 2, 00:23:07.402 "num_base_bdevs_discovered": 2, 00:23:07.402 "num_base_bdevs_operational": 2, 00:23:07.402 "base_bdevs_list": [ 00:23:07.402 { 00:23:07.402 "name": "spare", 00:23:07.402 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:07.402 "is_configured": true, 00:23:07.402 "data_offset": 256, 00:23:07.402 "data_size": 7936 00:23:07.402 }, 00:23:07.402 { 00:23:07.402 "name": "BaseBdev2", 00:23:07.402 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:07.402 "is_configured": true, 00:23:07.402 "data_offset": 256, 00:23:07.402 "data_size": 7936 00:23:07.402 } 00:23:07.402 ] 00:23:07.402 }' 00:23:07.402 11:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.402 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.662 [2024-11-20 11:35:15.297884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:07.662 11:35:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.662 "name": "raid_bdev1", 00:23:07.662 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:07.662 "strip_size_kb": 0, 00:23:07.662 "state": "online", 00:23:07.662 
"raid_level": "raid1", 00:23:07.662 "superblock": true, 00:23:07.662 "num_base_bdevs": 2, 00:23:07.662 "num_base_bdevs_discovered": 1, 00:23:07.662 "num_base_bdevs_operational": 1, 00:23:07.662 "base_bdevs_list": [ 00:23:07.662 { 00:23:07.662 "name": null, 00:23:07.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.662 "is_configured": false, 00:23:07.662 "data_offset": 0, 00:23:07.662 "data_size": 7936 00:23:07.662 }, 00:23:07.662 { 00:23:07.662 "name": "BaseBdev2", 00:23:07.662 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:07.662 "is_configured": true, 00:23:07.662 "data_offset": 256, 00:23:07.662 "data_size": 7936 00:23:07.662 } 00:23:07.662 ] 00:23:07.662 }' 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.662 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:08.229 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.229 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.229 [2024-11-20 11:35:15.898060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.229 [2024-11-20 11:35:15.898668] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:08.229 [2024-11-20 11:35:15.898705] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:08.229 [2024-11-20 11:35:15.898775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.229 [2024-11-20 11:35:15.915529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:08.229 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.229 11:35:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:08.229 [2024-11-20 11:35:15.918329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.164 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.165 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.165 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.165 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:09.165 "name": "raid_bdev1", 00:23:09.165 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:09.165 "strip_size_kb": 0, 00:23:09.165 "state": "online", 00:23:09.165 "raid_level": "raid1", 00:23:09.165 "superblock": true, 00:23:09.165 "num_base_bdevs": 2, 00:23:09.165 "num_base_bdevs_discovered": 2, 00:23:09.165 "num_base_bdevs_operational": 2, 00:23:09.165 "process": { 00:23:09.165 "type": "rebuild", 00:23:09.165 "target": "spare", 00:23:09.165 "progress": { 00:23:09.165 "blocks": 2304, 00:23:09.165 "percent": 29 00:23:09.165 } 00:23:09.165 }, 00:23:09.165 "base_bdevs_list": [ 00:23:09.165 { 00:23:09.165 "name": "spare", 00:23:09.165 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:09.165 "is_configured": true, 00:23:09.165 "data_offset": 256, 00:23:09.165 "data_size": 7936 00:23:09.165 }, 00:23:09.165 { 00:23:09.165 "name": "BaseBdev2", 00:23:09.165 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:09.165 "is_configured": true, 00:23:09.165 "data_offset": 256, 00:23:09.165 "data_size": 7936 00:23:09.165 } 00:23:09.165 ] 00:23:09.165 }' 00:23:09.165 11:35:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.423 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.423 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.424 [2024-11-20 11:35:17.080134] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:09.424 [2024-11-20 11:35:17.130274] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:09.424 [2024-11-20 11:35:17.130667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.424 [2024-11-20 11:35:17.130873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:09.424 [2024-11-20 11:35:17.130934] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:09.424 11:35:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:09.424 "name": "raid_bdev1", 00:23:09.424 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:09.424 "strip_size_kb": 0, 00:23:09.424 "state": "online", 00:23:09.424 "raid_level": "raid1", 00:23:09.424 "superblock": true, 00:23:09.424 "num_base_bdevs": 2, 00:23:09.424 "num_base_bdevs_discovered": 1, 00:23:09.424 "num_base_bdevs_operational": 1, 00:23:09.424 "base_bdevs_list": [ 00:23:09.424 { 00:23:09.424 "name": null, 00:23:09.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.424 "is_configured": false, 00:23:09.424 "data_offset": 0, 00:23:09.424 "data_size": 7936 00:23:09.424 }, 00:23:09.424 { 00:23:09.424 "name": "BaseBdev2", 00:23:09.424 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:09.424 "is_configured": true, 00:23:09.424 "data_offset": 256, 00:23:09.424 "data_size": 7936 00:23:09.424 } 00:23:09.424 ] 00:23:09.424 }' 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:09.424 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.991 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:09.991 11:35:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.991 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.991 [2024-11-20 11:35:17.669560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:09.991 [2024-11-20 11:35:17.669942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.991 [2024-11-20 11:35:17.669989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:09.991 [2024-11-20 11:35:17.670010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.991 [2024-11-20 11:35:17.670324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.991 [2024-11-20 11:35:17.670356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:09.991 [2024-11-20 11:35:17.670445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:09.991 [2024-11-20 11:35:17.670472] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:09.991 [2024-11-20 11:35:17.670488] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:09.991 [2024-11-20 11:35:17.670528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:09.991 [2024-11-20 11:35:17.687233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:09.991 spare 00:23:09.991 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.991 11:35:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:09.991 [2024-11-20 11:35:17.690005] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:10.925 "name": "raid_bdev1", 00:23:10.925 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:10.925 "strip_size_kb": 0, 00:23:10.925 "state": "online", 00:23:10.925 "raid_level": "raid1", 00:23:10.925 "superblock": true, 00:23:10.925 "num_base_bdevs": 2, 00:23:10.925 "num_base_bdevs_discovered": 2, 00:23:10.925 "num_base_bdevs_operational": 2, 00:23:10.925 "process": { 00:23:10.925 "type": "rebuild", 00:23:10.925 "target": "spare", 00:23:10.925 "progress": { 00:23:10.925 "blocks": 2560, 00:23:10.925 "percent": 32 00:23:10.925 } 00:23:10.925 }, 00:23:10.925 "base_bdevs_list": [ 00:23:10.925 { 00:23:10.925 "name": "spare", 00:23:10.925 "uuid": "920a3d79-1cbf-571f-8a4b-bc892c244791", 00:23:10.925 "is_configured": true, 00:23:10.925 "data_offset": 256, 00:23:10.925 "data_size": 7936 00:23:10.925 }, 00:23:10.925 { 00:23:10.925 "name": "BaseBdev2", 00:23:10.925 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:10.925 "is_configured": true, 00:23:10.925 "data_offset": 256, 00:23:10.925 "data_size": 7936 00:23:10.925 } 00:23:10.925 ] 00:23:10.925 }' 00:23:10.925 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.184 [2024-11-20 
11:35:18.863797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.184 [2024-11-20 11:35:18.901447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:11.184 [2024-11-20 11:35:18.901552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.184 [2024-11-20 11:35:18.901583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:11.184 [2024-11-20 11:35:18.901596] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:11.184 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.185 11:35:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.185 "name": "raid_bdev1", 00:23:11.185 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:11.185 "strip_size_kb": 0, 00:23:11.185 "state": "online", 00:23:11.185 "raid_level": "raid1", 00:23:11.185 "superblock": true, 00:23:11.185 "num_base_bdevs": 2, 00:23:11.185 "num_base_bdevs_discovered": 1, 00:23:11.185 "num_base_bdevs_operational": 1, 00:23:11.185 "base_bdevs_list": [ 00:23:11.185 { 00:23:11.185 "name": null, 00:23:11.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.185 "is_configured": false, 00:23:11.185 "data_offset": 0, 00:23:11.185 "data_size": 7936 00:23:11.185 }, 00:23:11.185 { 00:23:11.185 "name": "BaseBdev2", 00:23:11.185 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:11.185 "is_configured": true, 00:23:11.185 "data_offset": 256, 00:23:11.185 "data_size": 7936 00:23:11.185 } 00:23:11.185 ] 00:23:11.185 }' 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.185 11:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:11.817 11:35:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.817 "name": "raid_bdev1", 00:23:11.817 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:11.817 "strip_size_kb": 0, 00:23:11.817 "state": "online", 00:23:11.817 "raid_level": "raid1", 00:23:11.817 "superblock": true, 00:23:11.817 "num_base_bdevs": 2, 00:23:11.817 "num_base_bdevs_discovered": 1, 00:23:11.817 "num_base_bdevs_operational": 1, 00:23:11.817 "base_bdevs_list": [ 00:23:11.817 { 00:23:11.817 "name": null, 00:23:11.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.817 "is_configured": false, 00:23:11.817 "data_offset": 0, 00:23:11.817 "data_size": 7936 00:23:11.817 }, 00:23:11.817 { 00:23:11.817 "name": "BaseBdev2", 00:23:11.817 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:11.817 "is_configured": true, 00:23:11.817 "data_offset": 256, 
00:23:11.817 "data_size": 7936 00:23:11.817 } 00:23:11.817 ] 00:23:11.817 }' 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.817 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:11.817 [2024-11-20 11:35:19.619647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:11.817 [2024-11-20 11:35:19.619741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.817 [2024-11-20 11:35:19.619781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:11.817 [2024-11-20 11:35:19.619798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.817 [2024-11-20 11:35:19.620041] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.817 [2024-11-20 11:35:19.620063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:11.817 [2024-11-20 11:35:19.620138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:11.818 [2024-11-20 11:35:19.620169] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:11.818 [2024-11-20 11:35:19.620185] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:11.818 [2024-11-20 11:35:19.620201] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:11.818 BaseBdev1 00:23:11.818 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.818 11:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.211 11:35:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.211 "name": "raid_bdev1", 00:23:13.211 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:13.211 "strip_size_kb": 0, 00:23:13.211 "state": "online", 00:23:13.211 "raid_level": "raid1", 00:23:13.211 "superblock": true, 00:23:13.211 "num_base_bdevs": 2, 00:23:13.211 "num_base_bdevs_discovered": 1, 00:23:13.211 "num_base_bdevs_operational": 1, 00:23:13.211 "base_bdevs_list": [ 00:23:13.211 { 00:23:13.211 "name": null, 00:23:13.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.211 "is_configured": false, 00:23:13.211 "data_offset": 0, 00:23:13.211 "data_size": 7936 00:23:13.211 }, 00:23:13.211 { 00:23:13.211 "name": "BaseBdev2", 00:23:13.211 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:13.211 "is_configured": true, 00:23:13.211 "data_offset": 256, 00:23:13.211 "data_size": 7936 00:23:13.211 } 00:23:13.211 ] 00:23:13.211 }' 00:23:13.211 11:35:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.211 11:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.470 "name": "raid_bdev1", 00:23:13.470 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:13.470 "strip_size_kb": 0, 00:23:13.470 "state": "online", 00:23:13.470 "raid_level": "raid1", 00:23:13.470 "superblock": true, 00:23:13.470 "num_base_bdevs": 2, 00:23:13.470 "num_base_bdevs_discovered": 1, 00:23:13.470 "num_base_bdevs_operational": 1, 00:23:13.470 "base_bdevs_list": [ 00:23:13.470 { 00:23:13.470 "name": 
null, 00:23:13.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.470 "is_configured": false, 00:23:13.470 "data_offset": 0, 00:23:13.470 "data_size": 7936 00:23:13.470 }, 00:23:13.470 { 00:23:13.470 "name": "BaseBdev2", 00:23:13.470 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:13.470 "is_configured": true, 00:23:13.470 "data_offset": 256, 00:23:13.470 "data_size": 7936 00:23:13.470 } 00:23:13.470 ] 00:23:13.470 }' 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:13.470 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.728 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.728 [2024-11-20 11:35:21.376449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:13.728 [2024-11-20 11:35:21.376742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:13.728 [2024-11-20 11:35:21.376773] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:13.728 request: 00:23:13.728 { 00:23:13.728 "base_bdev": "BaseBdev1", 00:23:13.728 "raid_bdev": "raid_bdev1", 00:23:13.728 "method": "bdev_raid_add_base_bdev", 00:23:13.728 "req_id": 1 00:23:13.728 } 00:23:13.728 Got JSON-RPC error response 00:23:13.728 response: 00:23:13.729 { 00:23:13.729 "code": -22, 00:23:13.729 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:13.729 } 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.729 11:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.665 "name": "raid_bdev1", 00:23:14.665 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:14.665 "strip_size_kb": 0, 
00:23:14.665 "state": "online", 00:23:14.665 "raid_level": "raid1", 00:23:14.665 "superblock": true, 00:23:14.665 "num_base_bdevs": 2, 00:23:14.665 "num_base_bdevs_discovered": 1, 00:23:14.665 "num_base_bdevs_operational": 1, 00:23:14.665 "base_bdevs_list": [ 00:23:14.665 { 00:23:14.665 "name": null, 00:23:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.665 "is_configured": false, 00:23:14.665 "data_offset": 0, 00:23:14.665 "data_size": 7936 00:23:14.665 }, 00:23:14.665 { 00:23:14.665 "name": "BaseBdev2", 00:23:14.665 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:14.665 "is_configured": true, 00:23:14.665 "data_offset": 256, 00:23:14.665 "data_size": 7936 00:23:14.665 } 00:23:14.665 ] 00:23:14.665 }' 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.665 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.233 
11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.233 "name": "raid_bdev1", 00:23:15.233 "uuid": "a70a8838-770b-411c-a970-cb3c1200d5cd", 00:23:15.233 "strip_size_kb": 0, 00:23:15.233 "state": "online", 00:23:15.233 "raid_level": "raid1", 00:23:15.233 "superblock": true, 00:23:15.233 "num_base_bdevs": 2, 00:23:15.233 "num_base_bdevs_discovered": 1, 00:23:15.233 "num_base_bdevs_operational": 1, 00:23:15.233 "base_bdevs_list": [ 00:23:15.233 { 00:23:15.233 "name": null, 00:23:15.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.233 "is_configured": false, 00:23:15.233 "data_offset": 0, 00:23:15.233 "data_size": 7936 00:23:15.233 }, 00:23:15.233 { 00:23:15.233 "name": "BaseBdev2", 00:23:15.233 "uuid": "90efb0bc-f9ae-5d29-ac65-b47412753057", 00:23:15.233 "is_configured": true, 00:23:15.233 "data_offset": 256, 00:23:15.233 "data_size": 7936 00:23:15.233 } 00:23:15.233 ] 00:23:15.233 }' 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:15.233 11:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89441 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89441 ']' 00:23:15.233 11:35:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89441 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.233 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89441 00:23:15.492 killing process with pid 89441 00:23:15.492 Received shutdown signal, test time was about 60.000000 seconds 00:23:15.492 00:23:15.492 Latency(us) 00:23:15.492 [2024-11-20T11:35:23.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.492 [2024-11-20T11:35:23.338Z] =================================================================================================================== 00:23:15.492 [2024-11-20T11:35:23.338Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.492 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.492 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.492 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89441' 00:23:15.492 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89441 00:23:15.492 [2024-11-20 11:35:23.080392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.492 11:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89441 00:23:15.492 [2024-11-20 11:35:23.080572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:15.492 [2024-11-20 11:35:23.080659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:23:15.492 [2024-11-20 11:35:23.080681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:15.750 [2024-11-20 11:35:23.356786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.771 11:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:16.771 00:23:16.771 real 0m18.828s 00:23:16.771 user 0m25.666s 00:23:16.771 sys 0m1.500s 00:23:16.771 ************************************ 00:23:16.771 END TEST raid_rebuild_test_sb_md_interleaved 00:23:16.771 ************************************ 00:23:16.771 11:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.771 11:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.771 11:35:24 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:16.772 11:35:24 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:16.772 11:35:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89441 ']' 00:23:16.772 11:35:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89441 00:23:16.772 11:35:24 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:16.772 ************************************ 00:23:16.772 END TEST bdev_raid 00:23:16.772 ************************************ 00:23:16.772 00:23:16.772 real 13m4.142s 00:23:16.772 user 18m27.011s 00:23:16.772 sys 1m45.892s 00:23:16.772 11:35:24 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:16.772 11:35:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.772 11:35:24 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:16.772 11:35:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:16.772 11:35:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:16.772 11:35:24 -- common/autotest_common.sh@10 -- # set +x 00:23:16.772 
************************************ 00:23:16.772 START TEST spdkcli_raid 00:23:16.772 ************************************ 00:23:16.772 11:35:24 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:17.031 * Looking for test storage... 00:23:17.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.031 11:35:24 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.031 --rc genhtml_branch_coverage=1 00:23:17.031 --rc genhtml_function_coverage=1 00:23:17.031 --rc genhtml_legend=1 00:23:17.031 --rc geninfo_all_blocks=1 00:23:17.031 --rc geninfo_unexecuted_blocks=1 00:23:17.031 00:23:17.031 ' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.031 --rc genhtml_branch_coverage=1 00:23:17.031 --rc genhtml_function_coverage=1 00:23:17.031 --rc genhtml_legend=1 00:23:17.031 --rc geninfo_all_blocks=1 00:23:17.031 --rc geninfo_unexecuted_blocks=1 00:23:17.031 00:23:17.031 ' 00:23:17.031 
11:35:24 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.031 --rc genhtml_branch_coverage=1 00:23:17.031 --rc genhtml_function_coverage=1 00:23:17.031 --rc genhtml_legend=1 00:23:17.031 --rc geninfo_all_blocks=1 00:23:17.031 --rc geninfo_unexecuted_blocks=1 00:23:17.031 00:23:17.031 ' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.031 --rc genhtml_branch_coverage=1 00:23:17.031 --rc genhtml_function_coverage=1 00:23:17.031 --rc genhtml_legend=1 00:23:17.031 --rc geninfo_all_blocks=1 00:23:17.031 --rc geninfo_unexecuted_blocks=1 00:23:17.031 00:23:17.031 ' 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:17.031 11:35:24 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:17.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90127 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90127 00:23:17.031 11:35:24 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90127 ']' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.031 11:35:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.291 [2024-11-20 11:35:24.945414] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:23:17.291 [2024-11-20 11:35:24.945927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90127 ] 00:23:17.550 [2024-11-20 11:35:25.136527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:17.550 [2024-11-20 11:35:25.290094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.550 [2024-11-20 11:35:25.290130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:23:18.486 11:35:26 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.486 11:35:26 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.486 11:35:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.486 11:35:26 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:18.486 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:18.486 ' 00:23:20.432 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:20.432 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:20.432 11:35:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:20.432 11:35:28 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.432 11:35:28 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:20.432 11:35:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:20.432 11:35:28 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.432 11:35:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.432 11:35:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:20.432 ' 00:23:21.369 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:23:21.628 11:35:29 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:23:21.628 11:35:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.628 11:35:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:21.628 11:35:29 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:23:21.628 11:35:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.628 11:35:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:21.628 11:35:29 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:23:21.628 11:35:29 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:23:22.195 11:35:29 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:23:22.195 11:35:29 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:23:22.195 11:35:29 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:23:22.195 11:35:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.195 11:35:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.195 11:35:29 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:23:22.195 11:35:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.195 11:35:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.195 11:35:29 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:23:22.195 ' 00:23:23.131 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:23:23.405 11:35:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:23:23.405 11:35:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.405 11:35:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.405 11:35:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:23:23.405 11:35:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.405 11:35:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.405 11:35:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:23:23.405 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:23:23.405 ' 00:23:24.805 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:23:24.805 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:23:25.065 11:35:32 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.065 11:35:32 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90127 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90127 ']' 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90127 00:23:25.065 11:35:32 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90127 00:23:25.065 killing process with pid 90127 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90127' 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90127 00:23:25.065 11:35:32 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90127 00:23:27.597 Process with pid 90127 is not found 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90127 ']' 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90127 00:23:27.597 11:35:35 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90127 ']' 00:23:27.597 11:35:35 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90127 00:23:27.597 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90127) - No such process 00:23:27.597 11:35:35 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90127 is not found' 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:27.597 11:35:35 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:27.597 ************************************ 00:23:27.597 END TEST spdkcli_raid 
00:23:27.597 ************************************ 00:23:27.597 00:23:27.597 real 0m10.615s 00:23:27.597 user 0m21.774s 00:23:27.597 sys 0m1.302s 00:23:27.597 11:35:35 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.597 11:35:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:27.597 11:35:35 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:27.597 11:35:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.597 11:35:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.597 11:35:35 -- common/autotest_common.sh@10 -- # set +x 00:23:27.597 ************************************ 00:23:27.597 START TEST blockdev_raid5f 00:23:27.597 ************************************ 00:23:27.597 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:27.597 * Looking for test storage... 00:23:27.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:27.597 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.597 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.598 11:35:35 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.598 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.598 --rc genhtml_branch_coverage=1 00:23:27.598 --rc genhtml_function_coverage=1 00:23:27.598 --rc genhtml_legend=1 00:23:27.598 --rc geninfo_all_blocks=1 00:23:27.598 --rc geninfo_unexecuted_blocks=1 00:23:27.598 00:23:27.598 ' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.598 --rc genhtml_branch_coverage=1 00:23:27.598 --rc genhtml_function_coverage=1 00:23:27.598 --rc genhtml_legend=1 00:23:27.598 --rc geninfo_all_blocks=1 00:23:27.598 --rc geninfo_unexecuted_blocks=1 00:23:27.598 00:23:27.598 ' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.598 --rc genhtml_branch_coverage=1 00:23:27.598 --rc genhtml_function_coverage=1 00:23:27.598 --rc genhtml_legend=1 00:23:27.598 --rc geninfo_all_blocks=1 00:23:27.598 --rc geninfo_unexecuted_blocks=1 00:23:27.598 00:23:27.598 ' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.598 --rc genhtml_branch_coverage=1 00:23:27.598 --rc genhtml_function_coverage=1 00:23:27.598 --rc genhtml_legend=1 00:23:27.598 --rc geninfo_all_blocks=1 00:23:27.598 --rc geninfo_unexecuted_blocks=1 00:23:27.598 00:23:27.598 ' 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90407 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:27.598 11:35:35 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90407 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90407 ']' 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.598 11:35:35 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.869 11:35:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:27.869 [2024-11-20 11:35:35.575409] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:23:27.869 [2024-11-20 11:35:35.575642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90407 ] 00:23:28.131 [2024-11-20 11:35:35.765791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.131 [2024-11-20 11:35:35.901376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.067 11:35:36 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.067 11:35:36 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:23:29.067 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:23:29.067 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:23:29.067 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:23:29.067 11:35:36 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.067 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.067 Malloc0 00:23:29.067 Malloc1 00:23:29.067 Malloc2 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.326 11:35:36 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:23:29.326 11:35:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "eb2cc553-f3e7-48e9-8555-0efdb7e644d0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eb2cc553-f3e7-48e9-8555-0efdb7e644d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "eb2cc553-f3e7-48e9-8555-0efdb7e644d0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "93413681-a9d7-4fb4-b872-c8fb2462b22a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"7d07ad63-67cf-4d6e-a488-b516f4c8b266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "63adcbd3-b999-4b9c-94ef-58cb6d79f06e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:23:29.326 11:35:37 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90407 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90407 ']' 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90407 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90407 00:23:29.326 killing process with pid 90407 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90407' 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90407 00:23:29.326 11:35:37 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90407 00:23:32.620 11:35:39 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:32.620 11:35:39 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:32.620 11:35:39 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:32.620 11:35:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.620 11:35:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:32.620 ************************************ 00:23:32.620 START TEST bdev_hello_world 00:23:32.620 ************************************ 00:23:32.620 11:35:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:32.620 [2024-11-20 11:35:39.972783] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:23:32.620 [2024-11-20 11:35:39.972954] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90469 ] 00:23:32.620 [2024-11-20 11:35:40.154774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.620 [2024-11-20 11:35:40.305130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.185 [2024-11-20 11:35:40.900570] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:33.185 [2024-11-20 11:35:40.900663] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:23:33.185 [2024-11-20 11:35:40.900691] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:33.185 [2024-11-20 11:35:40.901310] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:33.185 [2024-11-20 11:35:40.901521] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:33.185 [2024-11-20 11:35:40.901559] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:33.185 [2024-11-20 11:35:40.901665] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:23:33.185 00:23:33.185 [2024-11-20 11:35:40.901704] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:34.559 ************************************ 00:23:34.559 END TEST bdev_hello_world 00:23:34.559 ************************************ 00:23:34.559 00:23:34.559 real 0m2.488s 00:23:34.559 user 0m2.007s 00:23:34.559 sys 0m0.353s 00:23:34.559 11:35:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.559 11:35:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:34.559 11:35:42 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:23:34.559 11:35:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.559 11:35:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.559 11:35:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 ************************************ 00:23:34.818 START TEST bdev_bounds 00:23:34.818 ************************************ 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:34.818 Process bdevio pid: 90518 00:23:34.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90518 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90518' 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90518 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90518 ']' 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.818 11:35:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:34.818 [2024-11-20 11:35:42.520410] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:23:34.818 [2024-11-20 11:35:42.520841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90518 ] 00:23:35.077 [2024-11-20 11:35:42.705590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.077 [2024-11-20 11:35:42.858568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.077 [2024-11-20 11:35:42.858711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.077 [2024-11-20 11:35:42.858727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.009 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.009 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:36.009 11:35:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:36.009 I/O targets: 00:23:36.009 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:23:36.009 00:23:36.009 00:23:36.010 CUnit - A unit testing framework for C - Version 2.1-3 00:23:36.010 http://cunit.sourceforge.net/ 00:23:36.010 00:23:36.010 00:23:36.010 Suite: bdevio tests on: raid5f 00:23:36.010 Test: blockdev write read block ...passed 00:23:36.010 Test: blockdev write zeroes read block ...passed 00:23:36.010 Test: blockdev write zeroes read no split ...passed 00:23:36.010 Test: blockdev write zeroes read split ...passed 00:23:36.268 Test: blockdev write zeroes read split partial ...passed 00:23:36.268 Test: blockdev reset ...passed 00:23:36.268 Test: blockdev write read 8 blocks ...passed 00:23:36.268 Test: blockdev write read size > 128k ...passed 00:23:36.268 Test: blockdev write read invalid size ...passed 00:23:36.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:23:36.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:36.268 Test: blockdev write read max offset ...passed 00:23:36.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:36.268 Test: blockdev writev readv 8 blocks ...passed 00:23:36.268 Test: blockdev writev readv 30 x 1block ...passed 00:23:36.268 Test: blockdev writev readv block ...passed 00:23:36.268 Test: blockdev writev readv size > 128k ...passed 00:23:36.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:36.268 Test: blockdev comparev and writev ...passed 00:23:36.268 Test: blockdev nvme passthru rw ...passed 00:23:36.268 Test: blockdev nvme passthru vendor specific ...passed 00:23:36.268 Test: blockdev nvme admin passthru ...passed 00:23:36.268 Test: blockdev copy ...passed 00:23:36.268 00:23:36.268 Run Summary: Type Total Ran Passed Failed Inactive 00:23:36.268 suites 1 1 n/a 0 0 00:23:36.268 tests 23 23 23 0 0 00:23:36.268 asserts 130 130 130 0 n/a 00:23:36.268 00:23:36.268 Elapsed time = 0.601 seconds 00:23:36.268 0 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90518 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90518 ']' 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90518 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90518 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 90518' 00:23:36.268 killing process with pid 90518 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90518 00:23:36.268 11:35:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90518 00:23:37.645 11:35:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:37.645 00:23:37.645 real 0m2.990s 00:23:37.645 user 0m7.373s 00:23:37.645 sys 0m0.472s 00:23:37.645 ************************************ 00:23:37.645 END TEST bdev_bounds 00:23:37.645 ************************************ 00:23:37.645 11:35:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:37.645 11:35:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:37.645 11:35:45 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:37.645 11:35:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:37.645 11:35:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:37.645 11:35:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:37.645 ************************************ 00:23:37.645 START TEST bdev_nbd 00:23:37.645 ************************************ 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:37.645 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90582 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90582 /var/tmp/spdk-nbd.sock 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90582 ']' 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.646 11:35:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:37.905 [2024-11-20 11:35:45.563505] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:23:37.905 [2024-11-20 11:35:45.563867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.905 [2024-11-20 11:35:45.734503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.164 [2024-11-20 11:35:45.886864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:38.731 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:39.298 
11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.298 1+0 records in 00:23:39.298 1+0 records out 00:23:39.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279022 s, 14.7 MB/s 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:39.298 11:35:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:39.298 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:39.298 { 00:23:39.298 "nbd_device": "/dev/nbd0", 00:23:39.298 "bdev_name": "raid5f" 00:23:39.298 } 00:23:39.298 ]' 00:23:39.298 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:39.298 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:39.298 { 00:23:39.298 "nbd_device": "/dev/nbd0", 00:23:39.298 "bdev_name": "raid5f" 00:23:39.298 } 00:23:39.298 ]' 00:23:39.298 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.556 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.815 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.074 11:35:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:23:40.334 /dev/nbd0 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.334 1+0 records in 00:23:40.334 1+0 records out 00:23:40.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394588 s, 10.4 MB/s 00:23:40.334 11:35:48 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:40.334 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:40.593 { 00:23:40.593 "nbd_device": "/dev/nbd0", 00:23:40.593 "bdev_name": "raid5f" 00:23:40.593 } 00:23:40.593 ]' 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:40.593 { 00:23:40.593 "nbd_device": "/dev/nbd0", 00:23:40.593 "bdev_name": "raid5f" 00:23:40.593 } 00:23:40.593 ]' 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:40.593 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:40.921 256+0 records in 00:23:40.921 256+0 records out 00:23:40.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731675 s, 143 MB/s 00:23:40.921 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:40.921 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:40.921 256+0 records in 00:23:40.921 256+0 records out 00:23:40.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0441426 s, 23.8 MB/s 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:40.922 11:35:48 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.922 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:41.180 
11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.180 11:35:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:41.444 11:35:49 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:41.444 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:41.713 malloc_lvol_verify 00:23:41.713 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:41.973 4e2bce5f-9d80-4c02-953e-53056ec4f0ec 00:23:41.973 11:35:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:42.233 7ddf86d0-adaf-4d4b-9795-cb5274b77cd6 00:23:42.233 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:42.493 /dev/nbd0 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:42.493 mke2fs 1.47.0 (5-Feb-2023) 00:23:42.493 Discarding device blocks: 0/4096 done 00:23:42.493 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:42.493 00:23:42.493 Allocating group tables: 0/1 done 00:23:42.493 Writing inode tables: 0/1 done 00:23:42.493 Creating journal (1024 blocks): done 00:23:42.493 Writing superblocks and filesystem accounting information: 0/1 
done 00:23:42.493 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.493 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:43.060 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90582 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90582 ']' 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90582 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90582 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.061 killing process with pid 90582 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90582' 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90582 00:23:43.061 11:35:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 90582 00:23:44.439 11:35:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:44.439 00:23:44.439 real 0m6.671s 00:23:44.439 user 0m9.551s 00:23:44.439 sys 0m1.389s 00:23:44.439 11:35:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.439 11:35:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:44.439 ************************************ 00:23:44.439 END TEST bdev_nbd 00:23:44.439 ************************************ 00:23:44.439 11:35:52 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:44.439 11:35:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:23:44.439 11:35:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:23:44.439 11:35:52 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:23:44.439 11:35:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.439 11:35:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.439 11:35:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:44.439 ************************************ 00:23:44.439 START TEST bdev_fio 00:23:44.439 
************************************ 00:23:44.439 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:44.439 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:44.440 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:44.440 11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.440 
11:35:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:44.440 ************************************ 00:23:44.440 START TEST bdev_fio_rw_verify 00:23:44.440 ************************************ 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:44.698 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.699 11:35:52 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:44.699 11:35:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:44.957 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:44.957 fio-3.35 00:23:44.957 Starting 1 thread 00:23:57.197 00:23:57.197 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90793: Wed Nov 20 11:36:03 2024 00:23:57.197 read: IOPS=8493, BW=33.2MiB/s (34.8MB/s)(332MiB/10001msec) 00:23:57.197 slat (usec): min=24, max=159, avg=28.80, stdev= 3.54 00:23:57.197 clat (usec): min=15, max=479, avg=187.51, stdev=67.94 00:23:57.197 lat (usec): min=44, max=536, avg=216.31, stdev=68.50 00:23:57.197 clat percentiles (usec): 00:23:57.197 | 50.000th=[ 192], 99.000th=[ 310], 99.900th=[ 375], 
99.990th=[ 420], 00:23:57.197 | 99.999th=[ 482] 00:23:57.197 write: IOPS=8931, BW=34.9MiB/s (36.6MB/s)(344MiB/9870msec); 0 zone resets 00:23:57.197 slat (usec): min=12, max=1262, avg=23.47, stdev= 6.62 00:23:57.197 clat (usec): min=83, max=1742, avg=431.96, stdev=57.00 00:23:57.197 lat (usec): min=105, max=1768, avg=455.43, stdev=58.70 00:23:57.197 clat percentiles (usec): 00:23:57.197 | 50.000th=[ 437], 99.000th=[ 586], 99.900th=[ 701], 99.990th=[ 1045], 00:23:57.197 | 99.999th=[ 1745] 00:23:57.197 bw ( KiB/s): min=32528, max=37872, per=98.52%, avg=35197.47, stdev=1572.96, samples=19 00:23:57.197 iops : min= 8132, max= 9468, avg=8799.37, stdev=393.24, samples=19 00:23:57.197 lat (usec) : 20=0.01%, 50=0.01%, 100=5.98%, 250=31.25%, 500=59.71% 00:23:57.197 lat (usec) : 750=3.03%, 1000=0.02% 00:23:57.197 lat (msec) : 2=0.01% 00:23:57.197 cpu : usr=98.68%, sys=0.42%, ctx=26, majf=0, minf=7389 00:23:57.197 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:57.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.197 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.197 issued rwts: total=84942,88157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.197 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:57.197 00:23:57.197 Run status group 0 (all jobs): 00:23:57.197 READ: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=332MiB (348MB), run=10001-10001msec 00:23:57.197 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=344MiB (361MB), run=9870-9870msec 00:23:57.455 ----------------------------------------------------- 00:23:57.455 Suppressions used: 00:23:57.455 count bytes template 00:23:57.455 1 7 /usr/src/fio/parse.c 00:23:57.455 582 55872 /usr/src/fio/iolog.c 00:23:57.455 1 8 libtcmalloc_minimal.so 00:23:57.455 1 904 libcrypto.so 00:23:57.455 ----------------------------------------------------- 00:23:57.455 00:23:57.714 
00:23:57.714 real 0m13.040s 00:23:57.714 user 0m13.315s 00:23:57.714 sys 0m0.913s 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:57.714 ************************************ 00:23:57.714 END TEST bdev_fio_rw_verify 00:23:57.714 ************************************ 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:57.714 11:36:05 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:57.714 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "eb2cc553-f3e7-48e9-8555-0efdb7e644d0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eb2cc553-f3e7-48e9-8555-0efdb7e644d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "eb2cc553-f3e7-48e9-8555-0efdb7e644d0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "93413681-a9d7-4fb4-b872-c8fb2462b22a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7d07ad63-67cf-4d6e-a488-b516f4c8b266",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' 
' },' ' {' ' "name": "Malloc2",' ' "uuid": "63adcbd3-b999-4b9c-94ef-58cb6d79f06e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:57.715 /home/vagrant/spdk_repo/spdk 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:57.715 00:23:57.715 real 0m13.259s 00:23:57.715 user 0m13.415s 00:23:57.715 sys 0m1.008s 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.715 11:36:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:57.715 ************************************ 00:23:57.715 END TEST bdev_fio 00:23:57.715 ************************************ 00:23:57.715 11:36:05 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:57.715 11:36:05 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:57.715 11:36:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:57.715 11:36:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.715 11:36:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:57.715 ************************************ 00:23:57.715 START TEST bdev_verify 00:23:57.715 ************************************ 00:23:57.715 11:36:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:57.973 [2024-11-20 11:36:05.565674] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:23:57.973 [2024-11-20 11:36:05.565833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90953 ] 00:23:57.973 [2024-11-20 11:36:05.748930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:58.232 [2024-11-20 11:36:05.904396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.232 [2024-11-20 11:36:05.904396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.799 Running I/O for 5 seconds... 00:24:00.668 11409.00 IOPS, 44.57 MiB/s [2024-11-20T11:36:09.888Z] 11354.50 IOPS, 44.35 MiB/s [2024-11-20T11:36:10.823Z] 11974.00 IOPS, 46.77 MiB/s [2024-11-20T11:36:11.758Z] 12538.00 IOPS, 48.98 MiB/s [2024-11-20T11:36:11.758Z] 12495.40 IOPS, 48.81 MiB/s 00:24:03.912 Latency(us) 00:24:03.912 [2024-11-20T11:36:11.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.912 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:03.912 Verification LBA range: start 0x0 length 0x2000 00:24:03.912 raid5f : 5.02 6236.75 24.36 0.00 0.00 30794.27 125.67 25856.93 00:24:03.912 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.912 Verification LBA range: start 0x2000 length 0x2000 00:24:03.912 raid5f : 5.01 6246.26 24.40 0.00 0.00 30825.53 318.37 25976.09 00:24:03.912 [2024-11-20T11:36:11.758Z] =================================================================================================================== 00:24:03.912 [2024-11-20T11:36:11.758Z] Total : 12483.00 
48.76 0.00 0.00 30809.90 125.67 25976.09 00:24:05.289 00:24:05.289 real 0m7.301s 00:24:05.289 user 0m13.398s 00:24:05.289 sys 0m0.294s 00:24:05.289 11:36:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.289 11:36:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:05.289 ************************************ 00:24:05.289 END TEST bdev_verify 00:24:05.289 ************************************ 00:24:05.289 11:36:12 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:05.289 11:36:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:05.289 11:36:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.289 11:36:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:05.289 ************************************ 00:24:05.289 START TEST bdev_verify_big_io 00:24:05.289 ************************************ 00:24:05.289 11:36:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:05.289 [2024-11-20 11:36:12.922664] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 
00:24:05.290 [2024-11-20 11:36:12.922848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91046 ] 00:24:05.290 [2024-11-20 11:36:13.110994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:05.549 [2024-11-20 11:36:13.275658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.549 [2024-11-20 11:36:13.275684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.117 Running I/O for 5 seconds... 00:24:08.027 506.00 IOPS, 31.62 MiB/s [2024-11-20T11:36:17.252Z] 634.00 IOPS, 39.62 MiB/s [2024-11-20T11:36:18.190Z] 654.33 IOPS, 40.90 MiB/s [2024-11-20T11:36:19.125Z] 634.50 IOPS, 39.66 MiB/s [2024-11-20T11:36:19.383Z] 647.00 IOPS, 40.44 MiB/s 00:24:11.537 Latency(us) 00:24:11.537 [2024-11-20T11:36:19.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.537 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:11.537 Verification LBA range: start 0x0 length 0x200 00:24:11.537 raid5f : 5.40 328.66 20.54 0.00 0.00 9614072.64 231.80 428962.91 00:24:11.537 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:11.537 Verification LBA range: start 0x200 length 0x200 00:24:11.537 raid5f : 5.29 336.00 21.00 0.00 0.00 9473216.67 234.59 419430.40 00:24:11.537 [2024-11-20T11:36:19.383Z] =================================================================================================================== 00:24:11.537 [2024-11-20T11:36:19.384Z] Total : 664.67 41.54 0.00 0.00 9543605.02 231.80 428962.91 00:24:12.915 00:24:12.915 real 0m7.835s 00:24:12.915 user 0m14.368s 00:24:12.915 sys 0m0.341s 00:24:12.915 11:36:20 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.916 11:36:20 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:12.916 ************************************ 00:24:12.916 END TEST bdev_verify_big_io 00:24:12.916 ************************************ 00:24:12.916 11:36:20 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:12.916 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:12.916 11:36:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.916 11:36:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:12.916 ************************************ 00:24:12.916 START TEST bdev_write_zeroes 00:24:12.916 ************************************ 00:24:12.916 11:36:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:13.174 [2024-11-20 11:36:20.823521] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:24:13.174 [2024-11-20 11:36:20.823715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91143 ] 00:24:13.433 [2024-11-20 11:36:21.027985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.433 [2024-11-20 11:36:21.175572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.999 Running I/O for 1 seconds... 
00:24:15.374 18975.00 IOPS, 74.12 MiB/s 00:24:15.374 Latency(us) 00:24:15.374 [2024-11-20T11:36:23.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.374 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:15.374 raid5f : 1.01 18948.19 74.02 0.00 0.00 6727.01 2129.92 9055.88 00:24:15.374 [2024-11-20T11:36:23.220Z] =================================================================================================================== 00:24:15.374 [2024-11-20T11:36:23.220Z] Total : 18948.19 74.02 0.00 0.00 6727.01 2129.92 9055.88 00:24:16.750 00:24:16.750 real 0m3.576s 00:24:16.750 user 0m3.079s 00:24:16.750 sys 0m0.355s 00:24:16.750 11:36:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.750 11:36:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:16.750 ************************************ 00:24:16.750 END TEST bdev_write_zeroes 00:24:16.750 ************************************ 00:24:16.750 11:36:24 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:16.750 11:36:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:16.750 11:36:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.750 11:36:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:16.750 ************************************ 00:24:16.750 START TEST bdev_json_nonenclosed 00:24:16.750 ************************************ 00:24:16.750 11:36:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:16.750 [2024-11-20 
11:36:24.471205] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:24:16.750 [2024-11-20 11:36:24.471348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91202 ] 00:24:17.009 [2024-11-20 11:36:24.647012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.009 [2024-11-20 11:36:24.802825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.009 [2024-11-20 11:36:24.802969] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:17.009 [2024-11-20 11:36:24.803020] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:17.009 [2024-11-20 11:36:24.803039] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:17.267 00:24:17.267 real 0m0.755s 00:24:17.268 user 0m0.494s 00:24:17.268 sys 0m0.153s 00:24:17.268 11:36:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.268 11:36:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:17.268 ************************************ 00:24:17.268 END TEST bdev_json_nonenclosed 00:24:17.268 ************************************ 00:24:17.526 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:17.526 11:36:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:17.526 11:36:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.526 11:36:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:17.526 
************************************ 00:24:17.526 START TEST bdev_json_nonarray 00:24:17.526 ************************************ 00:24:17.526 11:36:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:17.527 [2024-11-20 11:36:25.261995] Starting SPDK v25.01-pre git sha1 c0b2ac5c9 / DPDK 24.03.0 initialization... 00:24:17.527 [2024-11-20 11:36:25.262177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91229 ] 00:24:17.786 [2024-11-20 11:36:25.446741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.786 [2024-11-20 11:36:25.599823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.786 [2024-11-20 11:36:25.600004] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:24:17.786 [2024-11-20 11:36:25.600045] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:17.786 [2024-11-20 11:36:25.600079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:18.353 00:24:18.353 real 0m0.744s 00:24:18.353 user 0m0.478s 00:24:18.353 sys 0m0.159s 00:24:18.353 11:36:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.353 11:36:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:18.353 ************************************ 00:24:18.353 END TEST bdev_json_nonarray 00:24:18.353 ************************************ 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:24:18.353 11:36:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:24:18.353 00:24:18.353 real 0m50.711s 00:24:18.353 user 1m8.907s 00:24:18.353 sys 0m5.504s 00:24:18.353 11:36:25 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.353 11:36:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:18.353 
************************************ 00:24:18.353 END TEST blockdev_raid5f 00:24:18.353 ************************************ 00:24:18.353 11:36:26 -- spdk/autotest.sh@194 -- # uname -s 00:24:18.353 11:36:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:18.353 11:36:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:18.353 11:36:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:18.353 11:36:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:18.353 11:36:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.353 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:24:18.353 11:36:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:18.353 11:36:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:18.353 11:36:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:18.354 11:36:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:18.354 11:36:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:18.354 11:36:26 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:24:18.354 11:36:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:18.354 11:36:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.354 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:24:18.354 11:36:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:18.354 11:36:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:18.354 11:36:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:18.354 11:36:26 -- common/autotest_common.sh@10 -- # set +x 00:24:20.285 INFO: APP EXITING 00:24:20.285 INFO: killing all VMs 00:24:20.285 INFO: killing vhost app 00:24:20.285 INFO: EXIT DONE 00:24:20.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:20.285 Waiting for block devices as requested 00:24:20.285 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.543 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:21.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:21.111 Cleaning 00:24:21.111 Removing: /var/run/dpdk/spdk0/config 00:24:21.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:21.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:21.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:21.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:21.111 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:21.111 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:21.111 Removing: /dev/shm/spdk_tgt_trace.pid56787 00:24:21.111 Removing: /var/run/dpdk/spdk0 00:24:21.111 Removing: /var/run/dpdk/spdk_pid56552 00:24:21.111 Removing: /var/run/dpdk/spdk_pid56787 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57021 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57125 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57176 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57309 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57333 
00:24:21.111 Removing: /var/run/dpdk/spdk_pid57538 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57649 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57756 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57878 00:24:21.111 Removing: /var/run/dpdk/spdk_pid57986 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58031 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58068 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58138 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58227 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58697 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58772 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58846 00:24:21.111 Removing: /var/run/dpdk/spdk_pid58866 00:24:21.111 Removing: /var/run/dpdk/spdk_pid59014 00:24:21.111 Removing: /var/run/dpdk/spdk_pid59041 00:24:21.111 Removing: /var/run/dpdk/spdk_pid59189 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59206 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59276 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59294 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59360 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59383 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59585 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59616 00:24:21.370 Removing: /var/run/dpdk/spdk_pid59705 00:24:21.370 Removing: /var/run/dpdk/spdk_pid61080 00:24:21.370 Removing: /var/run/dpdk/spdk_pid61297 00:24:21.370 Removing: /var/run/dpdk/spdk_pid61443 00:24:21.370 Removing: /var/run/dpdk/spdk_pid62097 00:24:21.370 Removing: /var/run/dpdk/spdk_pid62320 00:24:21.370 Removing: /var/run/dpdk/spdk_pid62460 00:24:21.370 Removing: /var/run/dpdk/spdk_pid63120 00:24:21.370 Removing: /var/run/dpdk/spdk_pid63455 00:24:21.370 Removing: /var/run/dpdk/spdk_pid63601 00:24:21.370 Removing: /var/run/dpdk/spdk_pid65014 00:24:21.370 Removing: /var/run/dpdk/spdk_pid65267 00:24:21.370 Removing: /var/run/dpdk/spdk_pid65417 00:24:21.370 Removing: /var/run/dpdk/spdk_pid66820 00:24:21.370 Removing: /var/run/dpdk/spdk_pid67083 00:24:21.370 Removing: /var/run/dpdk/spdk_pid67224 
00:24:21.370 Removing: /var/run/dpdk/spdk_pid68637 00:24:21.370 Removing: /var/run/dpdk/spdk_pid69095 00:24:21.370 Removing: /var/run/dpdk/spdk_pid69235 00:24:21.370 Removing: /var/run/dpdk/spdk_pid70750 00:24:21.370 Removing: /var/run/dpdk/spdk_pid71017 00:24:21.370 Removing: /var/run/dpdk/spdk_pid71170 00:24:21.370 Removing: /var/run/dpdk/spdk_pid72681 00:24:21.370 Removing: /var/run/dpdk/spdk_pid72951 00:24:21.370 Removing: /var/run/dpdk/spdk_pid73097 00:24:21.370 Removing: /var/run/dpdk/spdk_pid74605 00:24:21.370 Removing: /var/run/dpdk/spdk_pid75103 00:24:21.370 Removing: /var/run/dpdk/spdk_pid75249 00:24:21.370 Removing: /var/run/dpdk/spdk_pid75401 00:24:21.370 Removing: /var/run/dpdk/spdk_pid75850 00:24:21.370 Removing: /var/run/dpdk/spdk_pid76620 00:24:21.370 Removing: /var/run/dpdk/spdk_pid77003 00:24:21.370 Removing: /var/run/dpdk/spdk_pid77703 00:24:21.370 Removing: /var/run/dpdk/spdk_pid78188 00:24:21.370 Removing: /var/run/dpdk/spdk_pid78975 00:24:21.370 Removing: /var/run/dpdk/spdk_pid79405 00:24:21.370 Removing: /var/run/dpdk/spdk_pid81393 00:24:21.370 Removing: /var/run/dpdk/spdk_pid81849 00:24:21.370 Removing: /var/run/dpdk/spdk_pid82302 00:24:21.370 Removing: /var/run/dpdk/spdk_pid84428 00:24:21.370 Removing: /var/run/dpdk/spdk_pid84919 00:24:21.370 Removing: /var/run/dpdk/spdk_pid85446 00:24:21.370 Removing: /var/run/dpdk/spdk_pid86523 00:24:21.370 Removing: /var/run/dpdk/spdk_pid86856 00:24:21.370 Removing: /var/run/dpdk/spdk_pid87813 00:24:21.370 Removing: /var/run/dpdk/spdk_pid88147 00:24:21.370 Removing: /var/run/dpdk/spdk_pid89108 00:24:21.370 Removing: /var/run/dpdk/spdk_pid89441 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90127 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90407 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90469 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90518 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90778 00:24:21.370 Removing: /var/run/dpdk/spdk_pid90953 00:24:21.370 Removing: /var/run/dpdk/spdk_pid91046 
00:24:21.370 Removing: /var/run/dpdk/spdk_pid91143 00:24:21.370 Removing: /var/run/dpdk/spdk_pid91202 00:24:21.370 Removing: /var/run/dpdk/spdk_pid91229 00:24:21.370 Clean 00:24:21.370 11:36:29 -- common/autotest_common.sh@1453 -- # return 0 00:24:21.370 11:36:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:21.370 11:36:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.370 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.628 11:36:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:21.628 11:36:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.628 11:36:29 -- common/autotest_common.sh@10 -- # set +x 00:24:21.628 11:36:29 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:21.628 11:36:29 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:21.628 11:36:29 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:21.628 11:36:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:21.628 11:36:29 -- spdk/autotest.sh@398 -- # hostname 00:24:21.628 11:36:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:21.887 geninfo: WARNING: invalid characters removed from testname! 
00:24:48.452 11:36:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:51.795 11:36:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.079 11:37:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:57.611 11:37:05 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:00.145 11:37:07 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:03.430 11:37:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:05.994 11:37:13 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:05.994 11:37:13 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:05.994 11:37:13 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:05.994 11:37:13 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:05.994 11:37:13 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:05.994 11:37:13 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:05.994 + [[ -n 5207 ]] 00:25:05.994 + sudo kill 5207 00:25:06.001 [Pipeline] } 00:25:06.015 [Pipeline] // timeout 00:25:06.019 [Pipeline] } 00:25:06.031 [Pipeline] // stage 00:25:06.035 [Pipeline] } 00:25:06.045 [Pipeline] // catchError 00:25:06.053 [Pipeline] stage 00:25:06.054 [Pipeline] { (Stop VM) 00:25:06.063 [Pipeline] sh 00:25:06.340 + vagrant halt 00:25:10.530 ==> default: Halting domain... 00:25:15.811 [Pipeline] sh 00:25:16.093 + vagrant destroy -f 00:25:20.281 ==> default: Removing domain... 
00:25:20.293 [Pipeline] sh 00:25:20.575 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:25:20.586 [Pipeline] } 00:25:20.602 [Pipeline] // stage 00:25:20.608 [Pipeline] } 00:25:20.623 [Pipeline] // dir 00:25:20.629 [Pipeline] } 00:25:20.645 [Pipeline] // wrap 00:25:20.652 [Pipeline] } 00:25:20.666 [Pipeline] // catchError 00:25:20.677 [Pipeline] stage 00:25:20.680 [Pipeline] { (Epilogue) 00:25:20.697 [Pipeline] sh 00:25:20.983 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:27.578 [Pipeline] catchError 00:25:27.581 [Pipeline] { 00:25:27.596 [Pipeline] sh 00:25:27.876 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:27.876 Artifacts sizes are good 00:25:27.884 [Pipeline] } 00:25:27.899 [Pipeline] // catchError 00:25:27.910 [Pipeline] archiveArtifacts 00:25:27.917 Archiving artifacts 00:25:28.018 [Pipeline] cleanWs 00:25:28.030 [WS-CLEANUP] Deleting project workspace... 00:25:28.030 [WS-CLEANUP] Deferred wipeout is used... 00:25:28.035 [WS-CLEANUP] done 00:25:28.037 [Pipeline] } 00:25:28.054 [Pipeline] // stage 00:25:28.060 [Pipeline] } 00:25:28.081 [Pipeline] // node 00:25:28.087 [Pipeline] End of Pipeline 00:25:28.125 Finished: SUCCESS